UpTrust
Social media built on trust and credibility. Where thoughtful contributions rise to the top.

© 2026 UpTrust. All rights reserved.

social psychology

  • dara_like_sara

    What outcome do you hope for? I was on a call for the last hour talking with a friend about supporting a vision he has. 

    At the end of the call, he asked "what are you hoping to get out of this?"

    I found the question really hard to answer in a way that makes any sense at all.

    My answer to the question comes in feelings, images, and body sensations. I see a bowl overflowing, I feel a magnetic pull, I experience a sense of duty, I follow synchronicities, I release and this is what came to me. One of my purposes in this life is to bring people together, especially really smart people. I don't have a goal, and if I did, I am sure it would change. I want to be of service to a vision of the best future possible.

    I'm after the experience. My vision will fill out along the way. 

    When I can't frame the answer in an intelligible way, it causes doubt: maybe this isn't right? Maybe my intuition would have a clearer answer if this were the right path. Maybe I'm cut off from what outcomes I hope for and need to work on getting more in touch with my desires. Am I too scared to name a desired outcome for fear of being let down if it doesn't come true?

    But I want to try on that the question may just be the wrong question for me. Or that my answer to the question isn't going to sound like what I've heard from other people. 

    Sharing here, and open to others' experiences of answering this question. How do you know what you want?

    And if you know me, happy to hear your perspective on my specific psychology or what you think is going on 🤔

    CoachWebb13•...

    Sounds like your friend thought you might have ulterior motives. Maybe they are not confident in their vision.

    social psychology
    interpersonal communication
    friendship and trust
    Comments
    0
  • angle•...
    Large-scale societal cooperation begins with small-scale relational skills. If we can't talk honestly with our partners, children, or neighbors, it becomes unrealistic to expect productive dialogue at political/institutional levels....
    social psychology
    conflict resolution
    interpersonal communication
    civic engagement
    community building
    Comments
    0
  • sness•...

    Science Says: Have Deeper Conversations

    Hi again UpTrust! I'm Sara Ness, Resident Research Nerd (aka Research Director) at the social health nonprofit SeekHealing, CEO of Authentic Revolution, and co-founder of the OG Austin Authentic Relating and Circling communities....
    social psychology
    behavioral science
    interpersonal communication
    vulnerability and emotional openness
    Comments
    0
  • UpTrust Admin

    Transcript from Tim Urban AMA: Big History, Emergence, and the Future: From the Big Bang to AI and Civilization's Stakes. A recorded conversation with Tim Urban about Big History, Big Futures, and the Battle for Our Better Minds, hosted by Jordan Myska Allen for UpTrust.


    The Forthcoming Book: Big Bang to Heat Death

    Jordan: Very happy to have you here. So the forthcoming book—I'm excited. We have fun conversations all the time, but I'm especially excited to talk about this. It's about big history and big future. What would you say it's about?

    Tim: Basically, it starts at the Big Bang, where I try to explain inflationary theory—which is extremely confusing. And then I go through the formation of the universe, the formation of the Earth, the origin of life and evolution, human evolution, ancient human history, modern human history, and then we hit the future. There's a bunch of cutting-edge stuff going on right now, and then where it might be going in 50, 100, 200 years. And then we go into the far, far future and end at the end of the universe—the heat death of the universe, which is really far away.

    Jordan: What's the through line? Are you looking at free energy? What do you think?

    Tim: I talk about energy, but I think the book you're referring to is David Christian's Origin Story, which we've talked about. Amazing book. The through line here is more about the big game-changing moments—essentially for life. If you go to energy, you can zoom out more and talk about even pre-life, and life is a step along the way. But within the part of the book from the origin of life to the end of humans—which is about 90% of the book—it's about: if we zoom way out, what do we see?

    We might see the origin of life as obviously a major step. Going from single-cell to multicellular—major step. Maybe photosynthesis, or sexual reproduction, or aerobic respiration, or one of these things—these are leaps, but maybe not mega-leaps. Going to multicellular is suddenly a mega-leap where the whole game changes. Now you have animals competing on a totally different playing field, a totally different game than the single-cell world.

    People can quibble with that—some would say going from archaea to eukaryotic cells is an even bigger deal. I don't know, but I'm calling multicellularity the first major leap after the origin of life. And then when I get into the human story, I think it helps to think about human history and the future through that zoomed-out lens. You might say that the origin of the ability for human brains to connect into a super-brain is actually the next one—maybe right after multicellular life—which happened about 50,000 years ago. And then maybe AI is another one, happening this decade or whatever.

    There's a lot of just trying to put things in perspective. Big history goes well with a discussion of the future because it orients you to just: what's even happening here? What's even going on? What is life? What are you even doing? When you do that, you can start to see the meaning of what's happening now a little more easily.


    Zooming In: Cells, Atoms, and the Question of Life

    Jordan: This is the thing I'm so interested in, because it changes you. I had an experience hiking with my son strapped on my chest when he was about six or nine months old, walking on lava rock, and I was like, "Holy shit—this rock finally got up and learned to walk, and it's happening right here, right now, with me and my son." There's this felt connection to the whole evolutionary sequence. What has changed for you as you've done this research? You've always zoomed way out, but now you're also parenting a toddler, so you're zooming way in and way out. How has your view changed?

    Tim: You can also get a lot of mind-blowing reality checks from zooming in—literally to the smallest units—and zooming out to the biggest. Size-wise, time-wise, it helps.

    Zooming out on space is the more common, cliché thing. "Oh my God, we're so insignificant. If the sun is a basketball in New York, the next star is like a golf ball in Warsaw, Poland. Oh my God, so much space, we're so tiny." Okay. But zooming in is crazy. You think, okay, I'm this organism, but I'm made of cells that aren't nearly as smart as I am. They have a lot going on, but they can't do the things I can do. It's this emergent property—I'm just a bunch of cells. That's all I am. It's not like there's me and then I have cells. No, I am cells, a pile of cells. And somehow I have consciousness, the ability to reason and do all these things.

    But at least a cell is a living thing—it's the smallest unit we call life. What gets weird is when you zoom in further. What's a cell made of? Organelles, plasma. What are those made of? You keep zooming in and you see these little protein micro-machines that are twisting and turning and clamping and siphoning. It's wild, incredibly interesting—but the proteins aren't alive. They're just atoms moving because of electrical charge. So that's actually all I am: a bunch of non-living protein micro-machines. You can zoom in further—protein micro-machines are just atoms that have very little going on. They just obey electric charge. So do the micro-machines. So do the cells. So do I. It gets into free-will questions and also just: what is life? I'm made of a bunch of little things that aren't alive, so what's going on?

    The book does lots of stuff like that. If I have a thought process I find interesting, I just put it in the book, because it's fun.

    Jordan: I think about this a lot. You're talking about these protein micro-machines and atoms—we act like this is the standard empirical, scientific-materialist view. But protein micro-machines and atoms behave super differently from this table my stuff is sitting on. I almost think of it as proto-matter—the stuff these things are made of. What about proto-life? Is there just a switch where things are dead and then suddenly alive? Or is there a spectrum?

    Tim: I think it's a spectrum. The first proto-cells—because a cell can form without life. A cell membrane is just what lipids do when they're in a certain configuration; they'll form a membrane. When that happens, and inside of that are some maybe primitive RNA molecules—which come together either as a freak thing or commonly, we don't really know, just electrical charges pulling things into certain configurations—but when one has the property of being able to self-replicate, and it tends to because of electrical charges, and it happens to be wrapped in a proto-cell membrane, and things start passing through the pretty loose early membranes... before you know it, it just keeps going and one day you're like, "Oh my God, that's a real cell. I guess there's life."

    Maybe the RNA itself we could call life. Different people have different definitions. What's less interesting to me is the semantic moment, because clearly the proto-stuff is doing the same thing as life. Once something starts self-replicating and therefore undergoing Darwinian pressure—because the thing that self-replicates badly just peters out and the molecules drift away, while the things that replicate well have more offspring—evolution kicks in as soon as you have self-replication with heredity. So the parent thing makes a copy of itself, not something totally different, and then the best copies will persist. You could call it life, but some people say it's not until you get to a full cell. I don't know. But it's weird, because the actual things are just atoms that have no life—they're no different than the atoms in the table.

    When you look at the protein micro-machines, it's shockingly intricate—better than anything we could create by far. That's just billions of years of evolution. And when you talk about your kid—I now have a toddler and a baby, and I'm holding my baby up and I'm just like, so many billions of years of evolution had to do its thing to produce this miracle. This fat baby I'm holding. She doesn't know anything. She'll grow up and whine and complain because she doesn't want to eat the food—but she's a miracle. She's an unbelievable engineering miracle.

    Jordan: For all we know, it took exactly 13.8 billion years. If we're the only life—that's just how long it took the entire history of the universe.

    Tim: We don't know. It might be common and happens all over the universe—in which case it's not a miracle that it happened, but it's still incredible. Or life is unbelievably rare, and we're literally alone—in which case it's the most freak accident, a one-in-a-quadrillion thing. And then once that started, you end up with the whole biosphere. We don't know.


    The Awe of Being Alive

    Jordan: When you talk about this stuff, you get worked up into a kind of state of awe.

    Tim: Oh yeah. There's so much to blow your mind if you just think about it for a second.

    Jordan: I think this is something that's uniquely Tim and freaking great. People love your explanations partly because you put things in super unique ways, but also because you love being in awe at the world.

    Tim: To look around at the society you're in—the streets, the buildings, these incredible systems of finance and law, all this intricate meta-thing we've built—what the hell is going on? We're a bunch of primates that until recently were all living in the forest, building nothing, maybe building a tiny shelter or wrapping some fur around us. This is not normal. Civilization is completely bizarre. If you zoom out, until yesterday basically none of this existed. It just happened. What the hell?

    I can also just get jaded like anyone else, but writing a book like this is helpful for me. When I want to write about something, I put on that lens of "what's actually going on here?" and suddenly I blow my own mind thinking about it.

    Jordan: My mind is blown constantly by these things. Even if—let's say our universe is teeming with life, I know about the Fermi paradox and everything—but just for a moment, let's say it's not. And I look outside and there's a truck. And I'm like, 14 billion years of evolution created a truck.

    Tim: Yeah.

    Jordan: The truck is the cutting edge of the universe.

    Tim: And what if this is the only planet in the whole universe that has something like a truck? But it's just as crazy if it's everywhere. What if there are equivalent alien trucks? They probably use wheels too—they're bound by the same laws of physics. We know that going from single-cell to multicellular probably isn't a great filter candidate, because it emerged independently dozens of times. So things that emerged independently here on lots of different continents—if there are lots of aliens, we can probably expect to see some of those things there too. Maybe on billions of alien planets there are literally trillions of trucks. Swimming pools. Vacations going on. Students studying astronomy and looking at the Milky Way Galaxy with a different name for it. Aliens having dance parties. Aliens getting drunk. It just looks like this quiet, dead spiral to us.

    Sometimes when my wife is watching the Great British Baking Show and they're all sitting there baking and talking in their British accents, I think: aliens in Andromeda are looking at this cold, dark, quiet, silent Milky Way galaxy, and this is going on inside it.

    Jordan: It almost puts me in an altered state. I feel like I'm on drugs—in a dumb joy.

    Tim: And all this is doing is just pointing out obvious things about reality. Things that are obvious once you think about them for a second. It's literally looking with clear eyes at reality and considering what it means, as opposed to being caught up in "I have a 4 PM meeting and I've got to pick up groceries"—just pausing and lifting your head up for a second. Essentially being on drugs.

    Jordan: Kids are good for this too. I'm walking along and there are these tiny little flowers—basically weeds—covering a field. And for my kids, those are flowers. They're gorgeous. And then I'm like, "Flowers, man, that's crazy." For whatever reason, a property of the universe is that if you're flamboyant and bright, things are attracted to you and that's good for you. Beauty is this emergent property of the universe, and it's cross-species.

    Tim: And likewise, when you see a sunset and think it's beautiful, or you see a beautiful person—that's billions of years in the making. That stuff runs deep. There's obviously a cultural aspect, but I think people find sunsets beautiful all over the world, regardless of culture.

    Jordan: Or they find a mountain valley with a stream going through it beautiful all over the world, because it's baked into our DNA. That was a good survival landscape—high ground, a stream for water, lush greenery.

    Tim: And if we'd survived better in flat, gray desert, we would think that was beautiful and think nothing of mountains and streams. It's all built from evolution.


    Evolution, Morality, and the Strange Species We Are

    Tim: This reminds me—you talk about things that have been around for a few hundred million years in our evolutionary stack. I was thinking about income inequality the other day. For hundreds of millions of years, it's been "to the victor go the spoils"—this structural, Darwinian cycle.

    There are no rules, no fairness. Evolution is just run by physics, and physics doesn't care. A star blows up, goes supernova, destroys planets—the star does that because physics tells it to. Look at the animal world: basically the same thing. And then we're this weird species who has all that deeply baked into us, but also this capacity to do better—to think, wait a second, those people are just as valuable as us.

    Jordan: Physics cares through us, but not until us.

    Tim: It's weird. But the physics is generating this moral sense in us too. It's this very strange thing—the idea that those people are just as valuable as us, and it doesn't make sense that it would be okay to do something to them that we don't want them to do to us. No other species really does this. And I've gotten yelled at before for claiming this is uniquely human. People say animals have empathy. Maybe a little bit, but I don't see chimps really extending that to other tribes.

    Jordan: On the flip side, for the people arguing against you—lots of humans still don't do it either.

    Tim: Most humans, deep down—and that's the thing—we also build these structures of social pressure. I think a lot of people act good because it's rewarded: externally, people will think you're a good person; and internally, you'll feel better because you've had this moral code baked into you. But it doesn't always mean you actually feel these things.

    What I call the "primitive mind" is just: survive, reproduce, acquire resources, have power over your environment, protect your young. That runs deep—the chimp feels the same way. And then we build this entire civilization with all these social rules and pressures. And sometimes those rules conflict with or override the primitive impulses. Sometimes they don't—ambition and personal wealth can enhance society. But we do want to create a society where sexual assault is evil.

    I think society is just a weird, amazing thing. Walking around the airport, seeing all these people with their clothes on, all behaving under the same moral codes—they bump you and everyone says, "Oh, sorry about that." We're all actors on this stage.

    Jordan: I think about that when I run into someone hiking in the greenbelt in Austin, and I'm like, man, if this was only a few hundred years ago, I'd probably be thinking, "Should I kill this person?"

    Tim: Totally. Or run. But you can trust it because society runs deep through almost everyone. You can't trust everyone—that's why there are criminals. But you can basically trust almost everyone, at least in a very high-trust society like ours. There are societies with a lot less trust, and I don't think that's because those people are inherently less trusting—I think the ideology currently underlying their society doesn't bake in the trust as well.


    Social Media: From Humming Place to Candy Store

    Jordan: So this takes me to UpTrust. The primitive mind is the thing social media optimized to reward, and we want to do something different. You're a public persona, you write—how do you keep from getting audience-captured, from having the primitive mind's reward signal drive you in a direction you don't want?

    Tim: My job is easier than yours in that way. If you're a solo blogger, and early on you think, "I need to get big, so I need to do stuff that's popular even though I don't like it"—that's when you run into the problem. Because the audience you attract likes the thing you're doing, not the thing you really wanted to do. And if you try to shift, your audience says, "This isn't what I'm here for. You've changed."

    I think a lot of solo creators today do what I did, which is from the beginning, furiously do what you actually want to do. When you do that, you attract people who tend to find the same things interesting. So when I learned about AI and said, "Oh, I need to write about this," people didn't say, "This isn't what you do," because I'd already made it clear I do lots of different things. If I was interested in it, turns out they were too.

    You have the hardest job in the world, though. These social media companies—at least the first round—were like going down the junk food aisle. Cinnamon Toast Crunch, Lucky Charms, the candy aisle. Companies trying to bypass your higher mind and sell directly to the primitive mind. Wake it up and say, "I want that," and the primitive mind takes over. They make billions because primitive minds are easy to trick.

    The very early social media was actually quite high-minded. This is Jonathan Haidt's research and Jon Ronson's too—his great book on this and his TED talk. Before 2008 or so, before the like button, before retweet really took hold, it was much more friendly. You would almost admit stuff that was embarrassing, and people would say, "Oh my God, I do that too." Ronson calls it a "humming place"—the opposite of what it's become.

    Then you have these new features, and suddenly you're incentivizing totally different behavior. The algorithm starts favoring traffic and virality. What makes a post go viral is outrage, or really catering to a political tribe. In this one little experiment, the best of humanity comes out with the right system, the right incentives—and everyone loves being there. Tweak a few settings and suddenly the primitive minds wake up, because there are all these new candy wrappers, and you end up with a pretty shitty environment.

    It's not all bad. I follow a lot of awesome accounts on X that are totally high-minded—funny, talking about science, history, whatever. But you end up on the For You page and you're going to see sensationalist, bombastic stuff designed to make you emotional in a not-great way.

    I'll give X credit for trying stuff. They came up with Community Notes. They have the Grok button next to every post where you can get a summary or check if it's fake. These are modern tools. But they have so much baggage with the culture that's been built there. What I don't like is when I'm on X and I start to feel like, "Wow, on this really controversial topic, everyone agrees with me." I wrote a book on this—I'm aware enough to say I'm in an algorithmic bubble, and it makes me want to leave the site. A lot of people don't even realize it's happening. They just think, "See, everyone agrees with me. I'm so right." That's all false confidence.

    Your job is so hard because you have to reinvent health food in a world that's evolved past it. But we need it so badly.


    What's Our Problem: The Super-Brain and the Culture War

    Jordan: When you wrote What's Our Problem?, you were saying this is the biggest issue facing civilization. Has that changed as you've zoomed out?

    Tim: It's not that the culture war itself is the biggest issue. It's that the issues that will determine our future—AI governance, bioweapons and how we protect against them, what we do with genetic engineering as it gets better—these could take us anywhere from full utopia to full extinction. The range is that big, in my opinion.

    The reason I focused on how politics makes smart people dumb is that our societies weren't built by individual humans. Individual humans just aren't that smart. Put a human alone in the forest next to a chimp alone in the forest—I might bet on the chimp to survive. Society was built by this collective human super-organism that happens when our brains connect together. That's why we have all the science and tech and magical knowledge, and why it's moved so quickly.

    In certain environments that encourage high-minded discourse and a culture of disagreement, the super-brain is smarter than any one human. When mob mentality and tribalism take over, the super-brain basically shuts off and you end up with these big stomping giants trying to cudgel each other. That's what the macro emergent thing becomes.

    The smart super-organism can do anything. I have full faith that it can get us safely to the next chapter. It doesn't mean it can create AI alignment, but it can figure out if it can, and it can figure out if it's a fool's errand right now and then stop. It can make wise decisions. The dumb giants will drive us right off a cliff as a species. They have no foresight. It was bad diplomats and tribal thinking that started World War I. These things are dumb. They can't see two feet in front of their face.

    So the culture war to me is not a battle of left and right. It's a battle of what I call low-rung—this tribal, mobish culture that emerges from our primitive minds banding together—versus the higher mind that's in all our heads trying to wrest control back. How that goes will determine whether the big brain gets us to the future safely, or we drive off a cliff.

    Jordan: It reminds me—second time I want to plug Ken Wilber's Integral theory to you. He makes a similar argument in a more complicated way.

    Tim: A Theory of Everything—is that the one I should read? Everyone recommends this to me.

    Jordan: It's a good one to start. Basically, he adds a couple of layers to the same model. Instead of low and high, he has egocentric, ethnocentric, and world-centric thinking, and how they're each trying to dominate. Then he adds one that's pluralistic, which is also fighting against the rational/objective. It's the kind of Stephen Pinker very-objective worldview versus someone else saying, "But what about all these other perspectives?" I don't want to go too deep on the integral rant.

    Tim: It's really interesting. I think we need as many people as possible trying to frame this kind of thing so it becomes common, something to think about. Because the first step is awareness—just like if you want a healthier society, step one is educating people on nutrition. Getting people to understand that when a package says "low fat" and "heart healthy," that's not necessarily good for you. Build awareness, and then behavior can change.

    My mom said in the 1950s they all ate white bread all the time because it said "enriched" on it and everyone thought it was healthy. There's that early phase when you can fool people before they catch on. With nutrition, it's not going to end our civilization. But if tribalism is one of those cyclical moral panics that come around every few generations, and we're in one now—because of the technology at play, we might not have the luxury of making mistakes and then wising up. The stakes are so high that if we fall into this at this particular time, we might just drive ourselves off a cliff.


    The Law of Mad Science

    Jordan: There's this law of mad science—it takes high-minded thinking to build the science, pass it on, and eventually build nuclear weapons and AI.

    Tim: But once the artifacts are built, it doesn't require that level of high-mindedness to use them. The low-minded entity can take advantage and wreak havoc.

    Jordan: You have all these stories—the Bronze Age Collapse, the fall of the Roman Empire, every empire in history. What takes centuries to build can be literally burned down.

    Tim: The ultimate example is a library that's centuries in the making, burned down in one hour.

    Jordan: And we see this in our toddlers too. We spend hours building a Lego thing and they're like—

    Tim: Yes! "Can you build it again?"

    Jordan: Exactly.

    Tim: That's why we don't want our super-brains behaving like toddlers. Culture war makes us into a macro toddler—unimpressive, no foresight, no understanding of what it's doing. In the past, at least when a civilization like Carthage was burned to the ground, the species persisted and the planet was okay. Now, if we do this wrong, it could be the end for all humans.

    And even without the existential risks—even without AI or nukes—just a coup that installs a totalitarian dictator would be awful. It's been so good for so long that we think this is normal, but this is only how things are because people act like adults, generation after generation, to uphold it. It's like any system: stop maintaining it for a month and it deteriorates. The water runs and the heat turns on and the streetlights work only because there's a constant frenzy of people maintaining it all. The default in the universe is entropy. I just want people to feel a little more fear. Thank God for this society—let's do everything we can to maintain it.


    Tribalism: High-Rung vs. Low-Rung

    Tim: There are different forms of tribalism. The bad kind dehumanizes the other, enforces conformity within your ranks, doesn't change its mind, is rigid and small-minded and shortsighted—beating the other team is all it thinks about. I don't think anything good comes from that. Maybe in a very rare case—like toppling a totalitarian dictator—you want people in that mode. But in a society like modern America, nothing good comes from it.

    The high-minded kind says: these aren't evil people. They're not subhuman. They're caught up in something bad. The enemy is the mind virus creating this behavior, not the people. And you continue to challenge your own beliefs, surround yourself with disagreement, and be willing to change your mind.

    The telltale sign of bad tribalism is rank hypocrisy. You had the exact opposite reaction when it was the other people doing the same thing. High-rung thinking sticks with principles—if your group starts betraying those principles, it's easy to say, "They're not my people anymore." Low-rung thinking doesn't care about principles. All that matters is the tribe.

    Jordan: This ties into a question that came in about people who get really successful and then get surrounded by people who don't challenge them. I've seen this happen to a lot of self-help people, gurus, spiritual leaders—the only people they attract are people who think like them, and they end up in really subtle hypocrisy.

    Tim: It's like a helium balloon. I have this graph I like to use: conviction on one axis, knowledge on the other. You want to be on the dotted line—your conviction should match how much you actually know. When you're in an echo chamber, you just drift up into unearned conviction. It's like junk food for the mind—the primitive mind just wants to identify with certain ideas and feel right. Just like you have to work hard not to eat too much sugar, you have to work hard not to drift into the arrogance zone where you have strongly felt, weakly supported beliefs.

    If you surround yourself with people who disagree for sport, who love to call out bias and hypocrisy—that's like having a kitchen full of healthy food. I'm on a bunch of text threads where there's nothing my friends love more than catching me being biased. We do it to each other. When someone puts out an opinion about a big news event, someone else will just disagree for sport. Because when everyone agrees, it's boring. And that environment trains your mind to always pause: Am I being biased? Am I doing this?

    In an echo chamber, no one calls anyone out on bias. No one can even see it.

    Jordan: I saw this a lot during the pandemic. I'm surrounded by people with different views on Trump—some neutral, some positive, some negative. The people who were really negative couldn't understand how a smart, caring person could possibly vote for him. They literally couldn't see it, because they didn't talk to anybody who thought that way.

    Tim: All they were seeing was the media-portrayed caricature—the worst version of the other side. The media sensationalizes the worst things those people do. And likewise for the other side.

    What I like about my text threads is it's not that we always disagree. It might be five people who can't stand Trump and one who likes him. In a low-rung environment, that person wouldn't even say it—they'd get the cold shoulder, maybe ostracized. In a high-rung environment, it's almost extra fun being the one person who disagrees, because you're not going to make people angry at you personally. It's like, "Here's a fun game—okay, tell me why you like Trump. Try to convince me." People push back on the substance, but nobody's mad. It can be a heated argument, but it's not a fight.

    I call the high-rung environment the "idea lab"—where ideas are treated like science experiments—and the low-rung one is the echo chamber.


    Audience Capture, Conformity, and Independent Thinking

    Tim: We evolved in environments where there was basically one leader and lots of followers. Most of us have this inclination to follow, to please, to fit in. That's not bad in a lot of situations—it can make you a great employee. But if that slider is up too high, you start conforming with really bad things that don't fit with your principles. You forget who you are and what your principles are, because you're entirely focused on being part of the group. That's not good. I don't think it makes you a happy person either—when you get to that level, you feel a deep lack of confidence.

    Jordan: Especially in a modern world. It may be that if we lived in a time where conformity was the way, you'd have security and confidence. But we're in a much more complex world.

    Tim: A long time ago, if you challenged the tribe, you'd be out and you'd starve. That's still baked into us, but it doesn't make sense today. We can be a little more courageous. A little more independent.


    The Future: Orienting Before Opining

    Jordan: You've done some really cool research on the future. How do you relate to it? Do you feel like you should be telling people what to do, or is your role more about naming things?

    Tim: My role with the future—and it'll continue to be this way—is to be a friend at a table with other friends, except I just spent three weeks reading about this topic that everyone should know about but doesn't yet. And now I can be like, "Here's the situation." Not to say, "Here's the answer," because—especially with the future—nobody knows the answers. There are a lot of very interesting, strong opinions that disagree with each other.

    I don't think it's above my pay grade, but it's not the crux of what I'm writing. What's more valuable is the steelman: What do they think? What's the best version of what they think? Before that, even: what is this topic? What's the history of it? How did we get here? What are people scared of? What could go wrong? What could go right? Just orient people.

    Especially with AI—that one's almost less needed because it's so in the news. But there are 20 different topics within biotech, energy, transportation, cosmology—all of these areas of the future being built right now. Explain what's going on, talk about the science and engineering, use some imagination—imagine what life could be like if things go well versus badly. I actually have fiction vignettes in the book to do that. And then: what do the skeptics say? What do the bullish people say?

    And it should be fun along the way. My goal is that everything is either interesting or funny or both. If I can't hook people, it won't reach them. When I've succeeded, I think I've helped orient people—given them a clear mental model of something they didn't have before, so they can understand the news headlines better and pass that information forward. Awareness is step one. If people aren't oriented, nothing else matters. And the worst part is when people start having tribal opinions on something like AI before they're even oriented—they don't know what they're talking about, so they just follow what "their people" say. We don't want that. We want awareness first. Then opinions can start.


    Predicting the Unpredictable

    Jordan: It seems like we're in almost a Cambrian explosion of futurism. We really can't predict where things will be in a pretty short time.

    Tim: When the internet started, were people really predicting Uber? When electricity first came around, did people predict movies, telecommunications, radio, TV? When the car came around, were people thinking about Walmart, suburbia, McDonald's? There are always good and bad unintended consequences. You can't connect the dots from early internet to Uber—you need smartphones, apps, four or five intermediate steps. And we're not good at connecting more than maybe one dot forward. In 2035, it might be seven dots ahead—you can't do that.

    With thousands of smart people making predictions, someone will always seem like a genius, but a broken clock is right twice a day. Predicting the future is worth it for playing out different scenarios so you have the proper level of "holy shit" about the stakes.

    Jordan: With cars, for example—if you were in the 1880s and you could do the math on how much highway infrastructure would be needed, you'd say, "There's no way." And yet here we are.

    Tim: People say right now, "There's no way you'll get a million people on Mars—you can't even breathe the atmosphere." "There's no way people will live to 500—nobody's lived past 120." Not all of those will come true. But I never say never to any of them. The thought experiment is: take George Washington here for a day and show him around. Or go back to the 1700s and describe today's world—they wouldn't believe you. They didn't even have electricity, so something like the internet or social media wouldn't even be comprehensible.

    And we might see that level of progress in our lifetimes. We could go forward in a time machine to a later point in our own lives and be so blown away we wouldn't even understand it.


    Parenting in a Rapidly Changing World

    Jordan: How do you parent with this?

    Tim: A toddler and a baby? Right now it's simple: I'm just spending time with them, loving them, making them laugh. Later, I want to encourage independent thinking, confidence in learning, and a habit of being willing to change your mind. High-rung thinking, basically.

    The faster the world moves, the more important high-rung thinking becomes—for keeping yourself safe, for succeeding, for achieving what you want in a world that's different than it was three months ago. You need confident, independent reasoning skills. If all you can do is look at conventional wisdom and what people around you are doing, that's going to lag behind. It's always the independent thinkers who figure things out first.

    I want them to be people who can independently come to conclusions, and then when conventional wisdom disagrees, to cautiously trust their own viewpoints—putting conventional wisdom in as a piece of information. "Okay, nobody else thinks this—maybe I'm wrong, I should keep looking." Not "Everyone's lying to me, they're all wrong, I'm smarter than them." But also not "Nobody else thinks this, so I must be wrong." Independent reasoning of a fairly intelligent person is so often smarter than conventional wisdom, which just lags and is slow.

    I want them reasoning from first principles about what they should do from 18 to 22, not necessarily just doing what everyone else is doing.

    Jordan: There's an interesting tension there. Humanity is so much smarter than any given human—partly because of conventions. But any individual needs to be able to think for themselves, while also drawing from the collective.

    Tim: There's a balance. Your brain is doing a dance with conventional wisdom. Sometimes you're going to be smarter than it. Sometimes it's actually wise and you weren't listening. Try to get better at telling the difference, without having complete fealty to either one. But when in doubt, trust your own reasoning.


    Initiation Ceremonies and Growing Up

    Jordan: I think a lot about initiation ceremonies—how we don't have them and what happens when people don't have them.

    Tim: What do you mean by initiation ceremonies?

    Jordan: For most of history, tribes had really intense rituals. There's the classic one where you stick your hand in a glove of bullet ants. There are tribes where they'd lock young men in a hut for 30 days with deeply unpleasant rituals. Or the Navajo, where you'd take so much of a local psychoactive that you literally couldn't remember your own name, and then you'd get a new name and be a new person—an adult with different responsibilities.

    I don't want my kids to go through anything like that, but there's a pattern: some meaningful landmark that says you're not a child anymore.

    Tim: I think there are some nice things about the fact that we have 25-year-old overgrown children in our society. There's something nice about getting to be curious, exploratory, and immature for longer. I don't think we want every 13-year-old to have that dead-serious face from an 1880 photograph. Kids should explore and feel like kids until they're 18 to 22.

    But there should be some moment when you start to say, "I need to be better than I was." Political tribalism—I want to encourage people to think of that as really silly, childish behavior. Once you hit your mid-twenties, it should be viewed as embarrassing. The way people might look down on a 28-year-old living in their parents' house and not making any money—I think a 28-year-old being super culture-warry on X or TikTok should seem like that. Right now, you have boomers, 40-year-olds, 30-year-olds throughout society acting like children. We have a lack of shame around it.


    The Right Kind of Shame

    Jordan: One thing I like about what you're saying is that you're not afraid of the power of shame. I think Brené Brown introduced a really helpful awareness of how shame can be toxic—the difference between guilt and shame. But people have taken that too far, saying shame is always bad. I'm like, no—shame is really helpful for getting us out of narcissism.

    Tim: I think there's a spectrum. On the far end, "I am a fundamentally worthless, bad person"—we don't want that. Move up the spectrum: "I am acting like a bad person. I'm better than this." Or even just embarrassment—the feeling of "I don't want people to know I was acting like this." I think we need that. When you go to full shame, where the verdict is just "you are bad," period, there's no incentive to improve. It's not even a productive emotion.

    Jordan: The best shame is actually a connection force. It brings you into contact with the larger world. The worst shame is deeply isolating.

    Tim: Just like hypocrisy—someone says "Fuck X group of people," but when someone else says the same thing about their ingroup, they call that person a bigot. They should look at those two things side by side and feel embarrassed. It should feel like your pants fell down in public. Within an echo chamber, there's no mechanism for that negative feedback. And this applies to all kinds of things—how you treat your significant other, your kids, how you are at work, how you treat your own body.


    The Stakes and the Mission

    Jordan: It makes sense you keep bringing it back to this. It's deep in the heart of the UpTrust mission—it's upstream of so many things happening societally right now.

    Tim: Yeah, I'm rooting really hard for you guys. When you have this discussion, it highlights how important the company is. Some customers might just see it as a nicer social network. But if you zoom out, there are a bunch of forces dragging society down, and this is one that's trying to do the opposite at a time when it matters. Really a lot.


    The Far Future: How Good Could It Get?

    Jordan: With the futurism—have you come across people talking about what happens if we actually succeed? If something like UpTrust or Balaji-style network states are successful at creating a coherent planetary nervous system? What are the extreme far futures of humanity actually managing to coordinate?

    Tim: The bad possibilities are easier, because there are just a few attractor states: permanent dictatorship, extinction, societal collapse that sets you back to the Stone Age. What's harder and more fun is how good things could get.

    I think we could be living in a world where people look back at today and say, "I cannot believe people ever lived like that"—the way we'd look at peasants under a really oppressive dictatorship in the year 800. They'll travel anywhere in the world in four hours on jets faster than we can imagine. People will live as long as they want to. Some will choose not to take any interventions—the hippies and some others will say, "I want to die at 85, and that's my choice." Others will want to live to 3,000. Whether it's mastering biology to de-age cells and refresh them and back up consciousness, or a robot body, or uploading into a virtual world.

    I could see the Fermi Paradox being resolved by really advanced species not showing signs because they're all living in virtual worlds. Maybe they see living in the physical world the way we see living in caves: we don't live outside like our ancestors did, and maybe they don't live in the physical world.

    Cultivated meat—after a bunch of resistance and pushback—I think will sweep the world. Billions of people eating really healthy, affordable meat made of real animal cells, but not from an animal. AI, if it ended up well-aligned, is facilitating many of these things. Like the Culture novels by Iain Banks, where AI runs everything and we're happy about it—"Thank God we don't have to run the world anymore. It was like Lord of the Flies with a bunch of children trying to manage it."

    No scarcity. Space habitats. People looking back at today: you still got sick? People died before they were ready? You lost loved ones before you were ready? Transportation was so slow? Distance still mattered? Pain, disease, poverty—just long gone. Climate change fears—cracked, solved, long ago.

    And the craziest part is I don't think you have to go 250 years—like George Washington to today. Things are moving so quickly that if it's good, it's going to be good in our lifetimes. And if we don't see it, it's probably because things went really bad and we ended up in a nightmare—where instead of "Imagine people used to live in 2026," we'd say, "We didn't know how good we had it."

    There are also people in the middle ground who think we'll slowly decline or slowly get better, and that 50 years from now will be a more technological—maybe more depressed—version of today's world. They could be right too. This could all be one big dot-com bubble.

    Jordan: Super cool. I don't have any great final question—I never expected to end up doing long-form interviews. It just happened.

    Tim: You're doing a lot of really amazing interviews. I like the little world you're building over there. It's great.

    Jordan: Thank you. Really appreciate it. Super fun. I look forward to more.



    Show Notes & References

    People Mentioned

    • Tim Urban — Writer, illustrator, and co-founder of Wait But Why, a long-form, stick-figure-illustrated blog with over 600,000 subscribers. Author of What's Our Problem? A Self-Help Book for Societies (2023). His 2016 TED talk on procrastination is one of the most-watched TED talks in history with over 74 million views. Website: waitbutwhy.com
    • Jonathan Haidt — Social psychologist at NYU Stern School of Business, author of The Righteous Mind: Why Good People Are Divided by Politics and Religion (2012) and co-author of The Coddling of the American Mind (2018). Referenced here for his research on social media's impact on political discourse and how platform design changes (Like buttons, retweets) shifted online culture.
    • Jon Ronson — Welsh journalist and author. Referenced for his book So You've Been Publicly Shamed (2015) and his TED talk on online shaming. He described early social media as a "humming place" that later became hostile.
    • David Christian — Historian, creator of the "Big History" framework, and author of Origin Story: A Big History of Everything (2018). Pioneered the academic field of Big History, which traces history from the Big Bang to the present. His TED talk has been widely viewed.
    • Ken Wilber — Philosopher and creator of Integral Theory, which maps stages of human development (egocentric, ethnocentric, world-centric, etc.). Author of A Theory of Everything (2000), Sex, Ecology, Spirituality (1995), and many others. Referenced by Jordan as offering a more complex framework similar to Tim's ladder model.
    • Steven Pinker — Cognitive psychologist at Harvard, author of Enlightenment Now (2018) and The Better Angels of Our Nature (2011). Referenced as representing the rational/objective worldview within the Integral framework discussion.
    • Iain Banks — Scottish novelist (1954–2013) who published his science fiction as Iain M. Banks, author of the Culture series of novels, which depict a post-scarcity, AI-governed utopia. Notable titles include Consider Phlebas (1987), The Player of Games (1988), and Use of Weapons (1990). Referenced by Tim as a vivid fictional depiction of what a positive AI-aligned future could look like.
    • Brené Brown — Research professor and author known for her work on vulnerability, shame, and empathy. Her books include Daring Greatly (2012) and The Gifts of Imperfection (2010). Referenced in the discussion of how her distinction between guilt and shame has been taken too far by some to mean that shame is always bad.
    • Balaji Srinivasan — Entrepreneur, author of The Network State: How to Start a New Country (2022). Referenced by Jordan in discussing the concept of technology-enabled "network states" and planetary coordination.

    Books Referenced

    • What's Our Problem? A Self-Help Book for Societies — Tim Urban (2023). Uses the "ladder" framework (high-rung vs. low-rung thinking) to analyze tribalism, political discourse, and the collective intelligence of societies. Features the concept of the "idea lab" vs. the "echo chamber" and the human "super-brain."
    • Origin Story: A Big History of Everything — David Christian (2018, Little, Brown and Company). Traces the history of the universe from the Big Bang to the present using the Big History framework.
    • A Theory of Everything: An Integral Vision for Business, Politics, Science, and Spirituality — Ken Wilber (2000, Shambhala). Introduces the Integral framework mapping stages of human development.
    • So You've Been Publicly Shamed — Jon Ronson (2015, Riverhead Books). Examines the culture of online shaming and its consequences.
    • The Culture series — Iain M. Banks (1987–2012). A series of science fiction novels depicting a post-scarcity civilization governed by benevolent artificial superintelligences ("Minds"). Key titles: Consider Phlebas, The Player of Games, Use of Weapons, Excession, Look to Windward.
    • The Network State: How to Start a New Country — Balaji Srinivasan (2022). Proposes technology-enabled governance structures.

    Key Concepts & Topics

    • Big History — An academic field pioneered by David Christian that examines history on the largest possible timescales, from the Big Bang to the present, identifying key "thresholds" of increasing complexity: the origin of stars, the origin of life, multicellularity, the emergence of language and collective learning, etc.
    • The Fermi Paradox — The apparent contradiction between the high probability of extraterrestrial civilizations existing and the lack of evidence for them. Tim discusses the "Great Filter" hypothesis—the idea that there may be some extremely unlikely evolutionary step that almost no species passes, which would explain why we don't observe alien civilizations.
    • The Great Filter — A hypothesized barrier in the development of life that prevents most civilizations from becoming interstellar. Candidate barriers include the origin of life, the jump from prokaryotic to eukaryotic cells, multicellularity, and the development of intelligence or technology.
    • Emergent Properties — The phenomenon where complex systems exhibit properties that their individual components don't possess. Tim uses this to describe how consciousness emerges from non-conscious cells, and how collective intelligence emerges from individual humans.
    • Protein Micro-Machines — Tim's term for the molecular machinery within cells—enzymes, motor proteins, ribosomes, and other protein complexes that carry out cellular functions through purely physical and chemical processes, despite not being "alive" in themselves.
    • The Primitive Mind vs. The Higher Mind — Tim's framework from What's Our Problem? The "primitive mind" refers to evolved instincts for survival, reproduction, tribal belonging, and status-seeking. The "higher mind" is our capacity for reasoning, empathy, long-term thinking, and principled behavior. Social media, in Tim's view, was initially designed to engage the higher mind but was reconfigured (through likes, retweets, algorithmic feeds) to appeal to the primitive mind.
    • High-Rung vs. Low-Rung Thinking — Tim's "ladder" framework. High-rung thinking treats ideas like science experiments—testing them, being willing to change your mind, engaging with disagreement charitably. Low-rung thinking treats ideas like sacred dogma—enforcing conformity, dehumanizing dissenters, and being driven by tribal loyalty rather than principles.
    • The Idea Lab vs. The Echo Chamber — Tim's terms for high-rung and low-rung social environments. In an idea lab, disagreement is welcomed and even fun. In an echo chamber, dissent is punished and everyone drifts into unearned conviction.
    • The Super-Brain / Super-Organism — Tim's concept that human civilization functions as a collective intelligence greater than any individual. When functioning in "high-rung" mode, this super-organism can solve enormous problems. When overtaken by tribalism ("low-rung" mode), it becomes a "macro toddler" that destroys what took centuries to build.
    • The Law of Mad Science — The observation that it takes high-minded, collaborative thinking to create powerful technologies (nuclear weapons, AI), but it doesn't require that same level of wisdom to use or misuse them—meaning the artifacts of intelligence can be wielded by tribal or destructive forces.
    • Conviction-Knowledge Graph — Tim's visualization of the relationship between how strongly you believe something and how much evidence supports it. The goal is to stay on the "dotted line" where conviction matches knowledge. Echo chambers push people upward into unearned conviction (strongly held, weakly supported beliefs).
    • Attractor States — Tim's term for the few stable endpoints that civilizations might converge on: permanent dictatorship, extinction, societal collapse, or various forms of flourishing.
    • Cultivated Meat — Meat grown from real animal cells in a lab rather than from a slaughtered animal. Tim predicts this technology will eventually sweep the world, providing affordable, healthy meat without animal suffering.
    • The Fermi Paradox and Virtual Worlds — Tim's hypothesis that advanced civilizations might resolve the Fermi Paradox by living primarily in virtual worlds, making their physical presence in the universe invisible—analogous to how we moved from living outdoors to living indoors.
    • UpTrust — Jordan Myska Allen's trust-based social media platform, designed to algorithmically prioritize credibility and nuance over engagement bait. Tim describes it as "trying to reinvent health food" in a world addicted to social media junk food.
    • Relatefulness — A community and practice co-founded by Jordan Myska Allen. Referenced when Jordan discusses cultivating group cultures that allow for genuine disagreement.


  • UpTrust Admin

    Transcript from Greg Lukianoff interview: Free Speech, Stoicism, AI and more. A Conversation with Greg Lukianoff Hosted by Jordan Myska Allen for UpTrust.

    Here's the interview on YouTube 



    Here's a transcript:

    Introduction

    Jordan: Greg, welcome. Happy to have you here. For those who don't know, Greg runs FIRE, the Foundation for Individual Rights and Expression. In my mind, it's the spiritual successor to the ACLU from before the ACLU decided to get political instead of sticking with its mission. I've heard you on a few different podcasts, and I really respect your commitment. There's a certain kind of integrity I think you embody: "Look, this is what we're about. This is what we want to stand for. We don't really care if you like it or don't like it, and we don't care what side of politics you're on. We're going to do what we think is right." I think we just need more of that. That's why I'm excited to share your thoughts and what you've done. You also have so much experience in the trenches, decades of it now.

    Greg: Thanks, Jordan. I just wrote something on my Substack, The Eternally Radical Idea, about how I understand why more organizations don't take a principled position on things like freedom of speech. One, you can make a lot more money if you decide to make one side or the other happy—this happens on both the left and the right. Two, it's less exhausting, because you have one consistent set of fans and one consistent set of enemies. But if you're going to be principled about it, there are people you really like on the left who hate your guts on half the cases, and people on the right you might've thought were allies who turn on you if you're on the opposite side of Trump, for example. It takes a lot of hard work.

    I wrote about my mentor, Harvey Silverglate, the co-founder of FIRE—a great old Brooklyn civil libertarian who lives up in Boston now. I told him how exhausting this was, and he told me, "Greg, in this life you can only really care about what ten people in the world think of you. Pick those ten carefully." And I was like, wow—that's the best advice I've ever gotten in my life.

    Jordan: I love that. I think about it because a lot of people in the personal growth field have this trope of being individuated, and it's true and good, but I actually do want those ten people. I don't want to care what the eleventh through infinity think, but I really do care—and want to care—about what those ten people think. It matters to be shaped by our relationships and have people hold us accountable.

    Greg: Yeah. I met you at Liv Boeree's podcast, for example. And she's someone who, even when I disagree with her, I 100% always take her opinion seriously, because I know she comes at it very honestly and very critically.


    Free Speech in the Age of AI

    Jordan: So let's dive in. For me, free speech—I grew up in Texas, basically as a civil libertarian, so free speech was always something I took for granted. Obviously something we need to defend and stand by. But even from that point of view, it's a little confusing in the age of AI. Does somebody have a right to clone my face and my voice and then put out fake stuff about me? Does somebody have a right to do it at scale with bots on millions of accounts? What do you think?

    Greg: The issues of AI are serious, just the same way the issues of social media are serious, as I talked about in my book with Jonathan Haidt, The Coddling of the American Mind. But the civil libertarian's role has got to be to explain to people: letting government actually make those decisions for you will not end the way you think it's going to end. It is a very dangerous path.

    The good news is that existing court decisions and existing jurisprudence are going to really help us understand the parameters of AI. Preexisting law actually really helps you. You do have a property interest in your image. If people are trying to deceive someone that it's actually you, that's fraud. If you create something that goes out and does that, you're liable for it. I think we underestimate the sophistication of American law. I run into this all the time with First Amendment law. People are like, "What about someone doing this or that?" And I'm like, that is already banned, ten times on a Sunday.

    For other things—if you're just talking about someone having highly offensive opinions—my whole point is epistemological. If you think you're safer from reality for not knowing what people really think, I've got really bad news for you.

    Jordan: Yeah. This has been the UpTrust position. Look, you have a right to believe whatever you want. The thing we're making a stand for is the process: out of the millions of things that could be put in your feed, who and what gets to decide? We trust that people sort themselves out, and we set the algorithms up so that you're able to have conversations with people who think differently from you—in the way that's most likely to be heard by you, instead of the current situation, where you're most likely to encounter the craziest version of the opposing view.


    Knowledge Creation, Conspiracy Theories, and Structured Friction

    Greg: I love what you're doing, because one thing people don't always understand is why the head of a free speech organization is always talking about knowledge creation. They're not loosely related—they're the same thing. Karl Popper's idea of conjectures and refutations is how we figure out what's not true, which, it turns out, is the only way you actually get to truth. You don't get there directly. You just get to a cloud of probability around what might be true. It's frustrating to people who want more absolute universes, but sorry, that's the best we can do.

    But the other thing people really miss—and I'm a big evangelist for this—is that there's an information value in knowing what people think and why. It's not of slight importance; it's of the greatest importance. People will say, "What about conspiracy theories?" I said this in a TED talk—the one where I met Liv, because she ran the whole thing. I said, listen: lizard people who live under the Denver airport do not run the world. But knowing that your future husband thinks they do, or all of your neighbors think they do, or your president thinks they do, is incredibly important information to have. These kinds of misconceptions—and we're all filled with them, we're all going to learn we were filled with misconceptions that we thought were rock-solid truth twenty years from now—this is really important data to have about your world.

    But you also need to test it against reality. One thing I've been working on with the Cosmos Institute—and we're going to be doing more at FIRE too—is increasing structured friction against what we think is true. Honestly, I think right now we're sitting on a giant pile of supposed knowledge that is a lot of weak research, a lot of bad scientific habits, a lot of misconceptions, a lot of trees built on rotten roots. But also everything from genuine data falsification to actual misconduct. The Zimbardo Stanford Prison Experiment—everything tells me that was complete fraud.

    And then there's the bias you end up having in highly politically homogeneous situations, where weak research that agrees with what's popular locally on campus gets through, and the stuff that isn't studied in the first place—and when it is, it's subjected to entirely different standards. So what you're doing, and what people like you are doing, I think is so important. Understanding the world as it is—this is a never-ending, arduous process. But it's worth it.

    Jordan: Yeah. Psychologically, I think that is super critical, what you just said. To be able to contend with the reality that there are two things that are just unpopular for the psyche, for the ego. One is being in constant relationship with uncertainty and mystery.

    Greg: Yes.

    Jordan: And the other is that it's not always comfortable.

    Greg: Amen.


    Race, IQ, and Taboo Research

    Jordan: One of the things that typifies what you're saying about knowledge creation—and why it matters—is a really good example for me. It has to do with race and IQ. Here's a topic that, if we can open it up and be honest about it, we find a couple of surprising things that don't fit neatly into the right or left or the racist/anti-racist narrative. My best understanding—I'm not a geneticist—is that there is a very tight correlation between race and IQ, but it changes within a couple of generations. So it's both nature and nurture, mostly nurture. I think actually being able to say, "Look, this is a real thing that shows up in the IQ data, and that's a problem," is an argument for the left: we need to do something structurally, because we messed this up. We made it so that, on average, somebody born from a particular racial background has a leg down. That's now encoded in their genes. But it doesn't have to always be the case.

    Greg: Yeah, and talking about these difficult things is really crucial, or else you end up with a distorted picture of the world. I think about the Larry Summers situation at Harvard—it was such a bad picture of things to come. People have all their issues with Larry for different reasons, but the speech he actually gave—which was misrepresented to this day in the media as him saying women are not as smart as men—was actually him asking, "Why are there fewer women in some of these really intensive theoretical physics-type fields?" His primary argument was that the life is extremely unpleasant: it's very isolating, it doesn't really involve interacting with people. Men tend to be more drawn to that. And women oftentimes want to have families, want to be in the world, want to interact with people—which is pretty great, actually, in its own way.

    The only point he made that got people angry was the idea that the distribution for men has fatter tails—meaning there are higher numbers of men who are very low IQ, and somewhat higher numbers of men who are multiple standard deviations above the mean. That really only starts to matter when you get into the deepest theoretical physics. There's lots of research on this, and we can't pretend there's not. But it got treated as blasphemy. And that just can't be allowed in a situation where you're trying to get to truth. Blasphemy and taboos are the enemy of truth discovery.
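    [Editorial note: the statistical point here—that the same mean with a slightly different spread produces large disparities far out in the tails—can be sketched numerically. The means, standard deviations, and cutoff below are illustrative assumptions, not figures from the conversation.]

```python
from math import erfc, sqrt

def upper_tail(mean, sd, cutoff):
    """P(X > cutoff) for a normal distribution with the given mean and sd."""
    z = (cutoff - mean) / sd
    return 0.5 * erfc(z / sqrt(2))

# Two hypothetical populations with the SAME mean but different spreads.
wide = upper_tail(100, 15.0, 145)    # cutoff is 3 SDs above the mean for the wider group
narrow = upper_tail(100, 13.5, 145)  # same cutoff, ~10% narrower spread

# A modest difference in spread yields roughly a 3x difference in this far
# tail, even though the two groups are identical on average.
print(wide / narrow)
```

    The same arithmetic applies symmetrically to the lower tail, which is why a variance difference alone predicts over-representation at both extremes.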


    Social Pressure, Elites, and the Crisis of Trust

    Jordan: How do we deal with that? There are these questions of power—you work on the legal and governmental front. There's a question of what happens when the censor is the algorithm. But then there's this other thing that's more about culture and psychology than a legal thing.

    Greg: I'm working on a book tentatively called The Neuroscience of Knowing—but it might eventually be called something like Reality Test. It's with a neuroscientist, and we're basically trying to make the argument that we've underestimated—and this is partially First Amendment people's fault, I'll take blame here—the role of social pressure on distorting what science and research do.

    Essentially, people self-censor if they think they're going to get fired—FIRE fights this all the time, and very successfully, I should stress. People get in trouble for what they say, we come to their defense, and we win an awful lot. But what's much harder to fight is people being afraid they're going to be a pariah if they even do certain kinds of research, especially on the hottest-button issues of the day.

    Here's the big miscalculation of American elites. And to be clear, when I say "elites," I basically mean people who are opinion makers, big business people, people in government, people in academia—it doesn't mean they're special or good. I actually think our elites need a lot of work. But one of the things they did—and I watched this in action—was start acting like they were automatically owed deference on their expertise, and that it couldn't be frittered away. Like, the public would always look to the experts and say, "We trust you." But wait a second: when you're highly politically homogeneous (there's good research on this), people don't really trust you. And when you start having situations where professor after professor gets in trouble for asking the wrong questions or being on the wrong side of a hot-button issue—which oftentimes is actually the side of most of the rest of the public—they're never going to trust you again.

    What you need to do is create environments where people are insulated from the social backlash for defying societal taboos, but where they have a reputational interest in their integrity. If you think it's going to ruin your life to say something that might actually be technically, scientifically true, you're probably not going to say it. But if you're in an environment where the whole thing is about truth-seeking, that can change things.

    Unfortunately, a lot of the worst activists on campus now don't even believe truth exists. And whenever I hear this argument, I'm like, "Listen, so do you believe everything is true?" And they're like, "Of course I don't believe everything is true. Some things are false." I'm like, "Then you believe in truth—because the only way we know truth is by figuring out what isn't true."

    Jordan: If something isn't true, that's a truth.

    Greg: That's one of the most important truths you can know. There's an old joke about Edison—when he worked so hard on various filaments for the light bulb and it took him forever to get there. Someone said, "So you've gone this far and learned nothing?" And he said, "No, actually I've learned about 10,000 different filaments that don't work in a light bulb." Very important knowledge to have.


    Objectivity, Subjectivity, and Peer Review

    Jordan: It's interesting—I feel like we have to evolve our thinking in some way. On the one hand, we have these problems with science that you're talking about, and part of that is we pretend there's a view from nowhere. I love that we've gotten to the moon. I think being able to be objective about stuff matters and is real. But we have to remember that we're always bringing in a subject—there's no way to step outside of the subjective bias. And this is why we need people to push back: let's test these ideas, let's do peer review, let's have people challenge us and say, "You must be wrong about this," and we have to keep engaging that. But if you take that too far, it becomes, "There's no truth."

    Greg: Yes.

    Jordan: And that's clearly a truth claim, ironically. So that doesn't work.

    Greg: A pretty silly truth claim, really. Because otherwise, why are we arguing about anything at all?


    Structural Reform: Counter-Institutions, AI, and Education

    Greg: I do slightly disagree with what we need to do now. If we had a profession that was really committed to this ethos of truth-seeking—feelings be damned, my taboos could all be wrong—that would be better. But I tend to take lessons from the founding fathers and from people like Montesquieu, who believed in separated, divided government—separation of powers, as it's now called. And people like James Madison, who were really serious about human nature. Listen, we're not going to fix human nature. People are going to be biased. They're going to be sure they're always right about everything until they die. So what you need are structures that actually reduce that problem. You can create them as long as they have different incentives, as long as they're adversarial—not in the sense that they hate each other, but in the sense that they're trying to disprove each other.

    Jordan: Competitive in the best way.

    Greg: Yeah. You can actually achieve this as long as you create the right structure. Not by what a lot of campus presidents are doing right now—since they realize a lot of people don't trust them, if they're being intellectually honest—which is saying, "Everyone's going to pinky-swear to be better." No. That's not going to work. That's not the way bias works. You're the first people to explain what bias is, but you just never apply it to yourselves, for goodness' sake. The way you get through it is better structure.

    Jordan: Yeah. And the incentives really matter. This is what I've been focusing on for the past many years—how do we set up the system and structure of online dialogue to reward attention in the right way? There's nothing wrong with fighting for attention, but right now the way you win is by being outrageous and picking a polarized side and leaning into it as hard as you can. So we're like, "No, we've got to change that." I don't know how to change the incentives with academia. It's really tricky.

    Greg: I think there are a couple of things I'm pretty bullish about. One: you need AI designed to comb through as much literature as possible and figure out who's falsifying data, who's plagiarizing—doing all the obvious immoral stuff. Falsified data is actually surprisingly easy to find. You just look for data that follows the kind of pattern people imagine looks random, because we're terrible at that: our brains aren't good at coming up with genuinely random things. So those are easy to flag, at least. Of course, in rare cases you might actually have data that comes out looking like that—but the idea is it would flag it so you could have a person review it.
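    [Editorial note: the kind of screen Greg describes can be sketched with a terminal-digit uniformity test—a hypothetical toy, not any tool FIRE or the Cosmos Institute actually uses. Genuine measurement noise tends to make last digits roughly uniform, while human-invented numbers often over-use "random-looking" digits.]

```python
from collections import Counter

def last_digit_chi2(values):
    """Chi-squared statistic for uniformity of terminal digits.

    A large statistic (vs. the chi-squared critical value for 9 degrees
    of freedom, ~16.9 at p = 0.05) is a reason to look closer at the
    data -- a flag for human review, not proof of fraud.
    """
    digits = [abs(int(v)) % 10 for v in values]
    counts = Counter(digits)
    expected = len(digits) / 10
    return sum((counts.get(d, 0) - expected) ** 2 / expected for d in range(10))

# Hypothetical example: 100 "measurements" whose last digit is always 3 or 7,
# the kind of over-regular pattern a person produces when faking randomness.
suspicious = [13, 27, 33, 47, 53, 67, 73, 87, 93, 107] * 10
print(last_digit_chi2(suspicious))  # far above 16.9 -> flag for review
```

    This is the simplest member of a family of checks (Benford's law on leading digits is another) that work precisely because our brains are bad at generating genuine randomness.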

    But then there's stuff where you can have that same AI look for what questions were asked, what things were based on nonsense. There are so many things that are based on—I always go back to Zimbardo. Zimbardo told his students to act like jerks, like they did in Cool Hand Luke. Is that a real experiment, or did you put on a play? He put on a play and then said, "Oh, human nature's terrible—I'm proving that we're awful and we'll immediately become evil as soon as we're given power." That's one of the reasons why it doesn't replicate.

    Interestingly, Milgram's experiments actually do replicate. But if we can start finding those rotten branches, you can start figuring out what isn't true. And here's an important thing: we will probably also find hidden gems—pieces of research done by some super-studious scholar back in 1911, where we discover, "Actually, turns out this person was right." You can also do foreign-language research much more easily and compare across languages. So I definitely think AI—and it can't just be one silicon pope, it's got to be multiple designs—needs to be turned on the entire corpus of human knowledge.

    But I also think you need institutions—what I call counter-institutions. Take over one of these failing colleges, make it Replication University. Have the entire ethos be: our job is to kick the tires of what we think is true. Using a combination of AI and manually trying to figure out what's true. That's a way where you're not relying on pinky-swears; you're relying on institutions to tear down what isn't true and find what is true.

    Jordan: Totally. And there's a way we can culturally support that by just saying, "This is cool. This is a neat thing." There are a lot of pressures—some economic, some cultural. Getting people behind that and saying, "This is a valuable thing" matters.


    The Signaling Function of Universities

    Greg: Universities also have other things that allow them to be successful. A huge part of it is signaling: if you're already smart and hardworking, all these schools have to do is not ruin you at the end of four years, and you're still going to be smart and hardworking. So congratulations. I don't want to call it a scam, but it sometimes seems a little bit like it. But there's also prestige, which is very important to humans and really underestimated, and networking, which also really matters.

    Bryan Caplan wrote a book called The Case Against Education where he also said that being able to complete college tells employers that you're conventional enough—essentially, that you'll sit at your desk and do your job.

    Jordan: That's true. And it's really bad for free speech. It's indicative of why we're having these issues and why you guys started and focused on campuses.

    Greg: 100%. But what you need is to be able to create something that is rigorous, that is tailored to people's individual learning speed, their weaknesses and strengths—and if you add the networking aspect and the prestige aspect, then you're really talking. I've been trying to think through ways you could do this for the self-starter, because when you look at the data, there's a top 10% of almost every college in the country that I would describe as essentially unstoppable. They will figure out a way to succeed in life, period. And sometimes these same people end up in big debt because of some of these schools. I think that's outrageous. I think there are much better ways to do this—much more seriously, much less politically, without imbuing them with a ton of societal taboos they're not allowed to talk about or disagree with, and much more inexpensively.

    Jordan: It's interesting—this just clicked for me using developmental psychology. In a lot of ways, universities traditionally have been like, "Yes, seek knowledge," but really they want to train you to be a socialized conformist—a third-order type of person, in Kegan's developmental psychology model. What we actually want is for you to be autonomous, self-authoring, self-starting. And we don't really have institutions that support that.

    Greg: Yeah. It reminds me very much of Plato's Republic. A lot of people that I really admire take it as Plato literally proposing a perfect society, even if there's lots of satire and wisdom in it. But I'm open to the idea that it's primarily a thought experiment, not really a plan for government. If you do take it seriously, though, there's this idea that you have to have an elite and then you have to lie to the people—you have to tell the myths about them being different races: some for working, some for fighting, some for thinking. It's all this terrible stuff. But Plato thought he was teaching them good things—stuff that would be better for societal harmony and prosperity.

    Every society does the same thing. We're terrified of the idea of having elites that think for themselves, and we do a lot of indoctrination. A lot of the elite colleges consciously favor people who emphasize activism in their materials. And I defend activism all day long—but at the same time, it is a certainty mindset. It's not a scholarly "I could be wrong" mindset. And that "I could be wrong" mindset is one of the reasons free speech matters so much. Free speech is necessary no matter what—but it gets infinitely more useful if you have the willingness to take seriously the possibility you might be wrong and actually hear other people out. Then it becomes this incredible innovation for truth-seeking, for connection, for growth.


    The Three Great Untruths

    Jordan: I went to Rice University, graduated in 2008, so it was still a place where I was confronted with a lot of different ideas.

    Greg: I've been impressed with Rice overall.

    Jordan: I loved it. It may have changed—it's been almost twenty years now. But I'm curious: I also love what you and Jonathan Haidt did with the three great untruths. It's coming up in my mind because one of the things I unofficially learned at Rice was how to disagree and still be friends. And when I look at these great untruths, I get scared that we're losing that.

    Greg: Yeah. The three great untruths were something Jonathan Haidt and I were working on for the book Coddling of the American Mind. We got really deep into intersectionality and a lot of the philosophy behind it, and I said to him—not really jokingly—we're starting to write a book that I don't want to read.

    We wanted to really simplify the advice. The theory of the three great untruths started with my family theory of advice: nobody's going to listen if you say, "Do exactly this thing." What they might listen to is, "Definitely don't do that thing." Negative advice—"Don't do the following"—is what most people are willing to learn. And we looked at things that disagreed with ancient wisdom, particularly Buddhist and Stoic thought, and things that modern psychology says are bad for you or will make you miserable.

    The three were:

    First: "What doesn't kill you makes you weaker." Terrible advice to give anyone if you want them to have a fully actualized life.

    Second: "Always trust your feelings." It sounds so nice, so cute, and it's absolutely terrible advice. Susan David, I think, said it well: your feelings are data, not directions. Oftentimes they're telling you things that are different from what you think.

    Third: "Life is a battle between good people and evil people." While I actually do believe there is a small subset of humanity—particularly malignant narcissists or people who are sociopaths who are also sadists—that I think you could say are the secular equivalent of evil, most people aren't. And we're acting like everyone who disagrees with us is.

    Jordan: Those evil people aren't trying to advance any particular ideology—they're just using whatever party or thought system to get what they need.

    Greg: Exactly, yeah. They've got no empathy. And the idea is that we've come to think of the people we disagree with as just being evil, stupid, or probably both. That's a very nice compliment to pay yourself and the era you're living in, but it's also very foolish.

    Jordan: I'm really familiar with "always trust your feelings" because, before UpTrust, I ran—and I'm still deeply involved with—the Relatefulness community. In a lot of ways, we've borrowed the best of what we can find from personal growth and transpersonal psychology. And there's a meme that "always listen to your feelings" gets smuggled in. I think it's really innocent—we are not taught how to feel very well or what to do with feelings. It's getting better. I'm a young parent and I get all sorts of parenting books about this. But a lot of us just didn't grow up learning to do anything with feelings, so there's this counter-movement to fully embrace them.

    But the way I think it works is that feelings, thoughts, sensations—all of it can lie to us. All of it can be right. It's all data, and we have to learn how to sort through it. It's that uncertainty, that discomfort of there's nothing you can rest on all the time. You have to continuously face reality and be undone.

    Greg: Yeah. And think about it in yourself. A lot of times I find myself getting really angry, and it's almost always—not exclusively, but almost always—because I'm angry at myself about something. If I take a deep breath, I'm like, "Oh, that's right, because I feel like I messed something up, and that's why I'm getting really disproportionately mad about this."

    But also feelings like jealousy or sadness—when you actually trace them back. And this is something that campus activists really weaponized to go after people they didn't like: "This person's talk is making me uncomfortable. The fact that they're even here on campus is a threat to me"—or usually not to me, it's a threat to some other unnamed group. "So this person can't speak here." When you look at that, one, it's manipulative—it's trying to get to a political goal by making people feel bad. But also, a lot of times, why is it making you feel bad? Do you think this person is metaphysically evil? Do you think people are simple receptacles who'll hear what the person says and become contaminated? When you start examining those thoughts, you often come back to the conclusion that you made this argument because it would be successful, and that's the primary reason you made it.


    The Fourth Great Untruth and "High Decoupling"

    Jordan: I was curious—you've defended all sorts of people, and probably people that you really didn't like and didn't agree with. Do you ever find yourself having this transformative moment of, "Oh, I'm actually just like them"—being in relationship with someone you're initially disgusted by?

    Greg: I don't have that response as much because I don't associate the right of free speech with the content of speech as much. I take it so deeply for granted that sometimes you're defending jerks. But: are you safer for not knowing what they think? Is it better that you can just decide someone's a jerk and shut them up? Is it possible they might even be right about some of the things they're being schmucks about? All of these things I take so deeply for granted.

    But I will say one thing that's been genuinely surprising about my career. I'm one of those old school—I remember as a kid, I grew up as a first-generation kid in an immigrant neighborhood, hearing about the fact that in the United States—and a lot of us were people who fled totalitarianism or authoritarianism—there was this group of largely Jewish lawyers so principled that they were willing to defend the free speech rights even of Nazis. And I was like, that's amazing. That's completely unique in history—somebody really is defending the principle, no exceptions.

    So I was ready coming in, in 2001, to defend all sorts of unpopular speech and vile opinion. But working on campuses, how often I'm dealing with genuinely sweet people who are like, "I don't understand how anyone got mad at me about this. Where did this come from?" How often it's like: you have intentionally decided that you're going to misunderstand what this person said, because honestly, I think there's a power play going on. How often I'm defending actually nice people as a First Amendment lawyer is—I wouldn't say it's a pleasant surprise, because finding out that even nice, well-meaning people get in trouble a lot is actually terrible. But it is funny how rare it is that I'm like, "This person is just completely vile, but I'll defend them." A lot of people I'm defending were just talking the way every other American talks everywhere except on campus—and that's what got them canceled.

    Jordan: There's an attitude you have that I really respect and love. You somehow don't fall into this drama triangle of "there's a victim that I have to save and an oppressor." How do you stay so on-mission without getting caught in these cycles?

    Greg: I almost want to say—my dad's Russian, my mom's ethnically Irish. There's a lot of great literature that comes out of those two cultures. Also, I feel like novels are a wonderful way of teaching that things are not that simple. And most of my life, when I get in an argument—I used to get in lots of arguments in law school, unsurprisingly. My best friend actually put it together. He said, "Your argument is almost always the same. You're almost always arguing two things: one, you don't really know that, and two, it's not that simple." They were always epistemically humble things.

    But I was bullied as a kid, and after I got sick of being bullied, I started fighting. And I remember telling my dad this very proudly. I described myself as an anti-bully who would beat up bullies. And my father, who fled the Soviets, said, "Is that not just another kind of bully?" And I was like, he's totally right.

    So the idea that the bully is always a terrible person—I have known so many people who could be very cruel to others who were the ones suffering the most. And I've known some people who think of themselves as victims every day and are some of the most privileged people I've ever met. But at the same time, everybody is complex, everybody is deep. I do think there are malignant narcissists and a special subcategory of people you've got to be really careful about. But most of us aren't that. Embracing the complexity of what most human beings are—knowing that they generally mean well but they're also selfish—doesn't leave a lot of room for a simplistic story of heroes and villains where you just happen to be a hero.

    Jordan: I also love what I'm hearing. You did a follow-up book?

    Greg: I did a follow-up book called The Canceling of the American Mind with Rikki Schlott, this absolutely brilliant Gen Z young woman. I wanted to write it with her because she's amazing, but also because since so much of The Coddling of the American Mind is about Gen Z, it was really helpful to work with one—instead of just two Gen Xers writing about Gen Z.

    In that, we actually added a fourth great untruth, which sounds a lot like the third but isn't exactly the same: "All bad people only have bad opinions." Because if you look at the way we argue on social media, so much of it is trying to make this moral pollution argument—Pamela Paresky, I think, coined this term—the idea that if you're even close to someone who is bad, everything you say can then be discounted. Or if you've done bad things yourself. Well, first of all, everyone's done bad things. That's one of the nice insights from Christianity—and I'm an atheist, but I take religion very seriously—that to a degree we're all fallen.

    The main argument is: I'm not going to address the substance of what this person is saying; I'm going to point out that they're a bad person and you shouldn't listen to them. In the TED talk I referenced earlier, I asked the entire audience to look at the person to their left, look them in their beautiful eyes, and say, "Just because I hate your guts doesn't mean you're wrong." They all did it, cracking up. But these things are not related. Wernher von Braun was incredibly important—he got us to the moon, but he was a Nazi. He wasn't wrong on rocket propulsion. And Thomas Malthus was by all reports a very sweet and thoughtful man, but his ideas on overpopulation, being oversimplified, were used to justify some of the greatest crimes against humanity of the twentieth century.

    Jordan: The fourth great untruth—there's a term from my Rationalist Bay Area friends: "high decoupling." You can decouple the theories or ideas of Malthus or von Braun from the character. I think it's important for thinking and epistemics.

    Greg: Absolutely. As far as people I think were awful but weren't totally wrong: Jean-Jacques Rousseau—horrible person. Genuinely horrible, probably a malignant narcissist. I think he was wrong about the general will—I don't want to live in a society that's a dictatorship of the majority. However, his ideas on how to raise children? There's a lot of positive stuff in there, which I take very seriously.

    Karl Marx—horrible person. The more you learn about him, how racist he was, how nasty, how bad of a scholar he was in lots of ways. My inclination is to dismiss him. But I still go back and read him. I disagree with him for getting all sorts of other things wrong, but I think he was addressing real problems at a real time.

    And by the way, decoupling also helps you enjoy art. Some of the best songs were written by or sung by horrible people. Don't ruin the art by thinking too much about the artist.


    Historical Figures, Self-Hatred, and Human Complexity

    Jordan: It's really unhelpful to constantly have to—it's helpful to look at our historical figures and say, "Look, this person had slaves. That sucks. Be careful of that." But it's also helpful to be like, "They did some amazing stuff anyway," because it gives us a sense that we can do amazing things even though we've messed up a bunch of stuff.

    Greg: Yeah. Even though we're doing things right now that people will think we're horrible for doing, and we don't even necessarily know what those things are. As Liv Boeree pointed out, factory farming is probably one of those things we'll look back on and be like, "Ugh, that's a pretty horrible thing." And a lot of us participate in it.

    Jordan: We might be doing really horrible things to animals and still be helping push forward society in a beautiful way. And we're not miserable people who should be flagellating ourselves in public because of our failings.

    Greg: I think a lot about this desire to hate ourselves. My mentor Harvey Silverglate, when I mentioned this, had no idea what I was talking about—"Who feels that way?" But if you look at history, I'm always amazed that Christianity, which had this very high ascetic quality—the Gnostics in particular had this sense that the body is evil, very literally self-hating—beat out the much more fun polytheism of Rome. That says a lot about human nature: there's some subset of humanity that, if you tell them awful things about themselves and everyone around them, they'll be like, "Yeah, that's totally right. I am an awful person."

    We've put up with a kind of popular misanthropy. My friend Alyssa Rosenberg, who used to work for the Washington Post, talked about how often she would get responses that were basically, "Yeah, the planet would be much better if humans didn't exist." And it's like—that's a wonderful luxury belief you have there. I actually happen to be partial to us weird little animals.

    Jordan: Me too. I really hope we make it. I think about this with my kids. There are sometimes people who say, "Why would you have kids?" On one hand, the population's declining. Some extreme environmentalists—I have a friend, a climbing buddy, who says, "I don't want to have kids because I don't want to have a negative impact on the world." And I'm like, man—if we all died today, I would be so glad that we existed. I'd be so glad my kids existed, even for two and a half years. And they would be glad they existed too.

    Greg: Yeah. There was a great line in Erika Christakis's book The Importance of Being Little—the most important thing is the kid that's right in front of you. Not exactly how she put it, but the idea that there's a tendency to think so much about what they will be as opposed to what they are right now. I didn't really understand that until I had kids. And then it was just like, all that matters right now with my little boys is today and how wonderful it is just to be with them today. That's the part that matters, because it's the part that's really happening.

    I named them Maxwell and Benjamin—for James Clerk Maxwell and Benjamin Franklin, because I wanted to name them after scientists.

    Jordan: Do they like science now?

    Greg: Very much. The younger boy Maxwell—he's got a little bit of a lisp, but he's also this 90th-percentile giant, natural-linebacker kid. It's always nice to hear him talk about James Clerk Maxwell. They love who they're named for.


    The Roots of Self-Hatred and Self-Criticism

    Jordan: What gives rise to self-hatred? What is it about human nature that makes us so susceptible to wanting to shame ourselves?

    Greg: I've thought a fair amount about this. I went back and read the entire Bible a couple of years ago—even the parts that are exhaustively explaining how long the vestments have to be and all of that stuff—because I wanted to get a sense of the whole thing. And it was super interesting, I'm really glad I did it.

    When it comes to the roots of self-hatred, when I used to write fiction, I had one character say—and this is a little gross—"Don't let your life be driven by the instincts that make cats bury their own shit." Essentially, there's this sense of the sacred and the contaminated that is very deep. Jonathan Haidt writes about this too—the idea of sacredness and purity. I think some of it comes from an instinct to make sure you're not in a literally contaminated space, a kind of almost-OCD-like drive to purify that causes purification rituals. It's related to that instinct, but it's also sometimes about a gender difference. The neuroscientist I'm writing the book with wrote about how a particular kind of personality disorder in men manifests in violent outbursts, whereas she thinks the same condition in women tends to internalize, turning into a more powerful self-hatred rather than an outwardly directed hatred.

    I think there are personality types that are much more prone to it. And somehow it wins a certain amount of credibility, because if you're self-critical—and you should be—a lot of people respond with, "Okay, this person isn't just arrogant, they don't always just think they're right." That wins some credibility. But sometimes self-hatred can look like merely being self-critical.

    Jordan: Yeah. I've been noticing this personally in the past few weeks. I have a very strong habit of signaling a kind of humility, which I think is sometimes very innocent and unconscious. But it's actually over-signaling. I think I often have more confidence than I'm putting out, and it's because of this social benefit I get from being humble. I don't like that about myself. I'd like to be more honest and straightforward.

    Greg: I feel like for me that's always such a complicated feeling, because I got it so beaten into my head that the worst thing you could be was arrogant. So much so that the only way to deal with that was to internalize it as a kind of self-hatred—which I'm getting over, over time. But it can depend on the day. The more pleasant life is one where you're self-critical but not self-hating. That's the place you want to reach. And once you're there, having that kind of cool, humble confidence is probably the best of all worlds.


    Stoicism: Seneca, Marcus Aurelius, and Practical Wisdom

    Jordan: Connecting the self-hatred to neuroticism, and then back to Stoicism and Buddhism to some extent—it actually gives me some insight. There's something about self-hatred that gives us a sense of control over an environment that is wildly chaotic.

    Greg: Yep.

    Jordan: It's a false control, because to a certain extent the self-criticism is the right instinct: what I can control is myself. I can learn self-discipline, keep my impulses at bay. And we eventually get to this Stoic thing of: the only thing I really have control over is the way I view things, how I make sense of this.

    Greg: Yeah. My first introduction to actually reading the Stoics—and this might annoy some of your listeners—was Marcus Aurelius. And when I read Meditations, I was like, "This guy's depressed." I know from personal experience what depression sounds like.

    I only started reading Seneca's Letters to Lucilius maybe five or ten years ago. And I still listen to them on my headphones when I'm walking around having a hard time, because they're so filled with wisdom and wit. The idea that someone's jokes from the Roman Empire are still funny—that's Seneca. And he was also George Washington's favorite, I found out. But that version of Stoicism I really loved. I also loved that he had the humility to constantly be talking about Epicurus.

    Jordan: I came to appreciate the Epicureans because of reading Seneca. It's so cool.

    Greg: Yeah. And the fact that he was like, "Whatever, I don't think my stuff is always right"—and I'm like, you're awesome.

    Jordan: Talk about that humble confidence. On the one hand, he's constantly referencing his rivals. On the other hand, he called his shot—he told Lucilius, "People are going to know your name in thousands of years because of my letters." And I'm reading this two thousand years later.

    Greg: Which is arrogant, but also accurate. Turned out he was right.

    Jordan: I was lucky. I didn't read Meditations until after I'd read Seneca, so I was like, yeah, he's depressed, but it's cool in these certain ways.

    Greg: Definitely interesting, particularly for an emperor to be like that. But I found it didn't make me want to run out and be a Stoic. Whereas some of the other Stoic ways of looking at the world I find so incredibly useful.


    What Can People Do? Contacting FIRE and Standing Up for Free Speech

    Jordan: I know we're going to come to an end pretty soon, but I could keep jamming forever. I'm curious: for people in personal situations—somebody told me a story recently about a local journalist who was writing a story about a sports team, and the sports team's owner had influence at the local paper, so they pulled her story. What do people do to stand up for free speech when they're in these kinds of personal situations of censorship?

    Greg: FIRE's not that small of an organization anymore. Our budget this year is $35 million. We're about 130 to 140 employees. So we do a lot now. One thing we do that's not as well appreciated, because it's more regional, is defend people in seemingly small cases that are more local. That work matters in its own right, but it also helps convey to people that free speech isn't just about what's going on in Silicon Valley or Washington, D.C. It's about you and your right to tell people you think the local police department is corrupt or the local fire station is spending too much money.

    Possibly the most horrifying case I've seen in my entire career: I was horrified by the murder of Charlie Kirk. I actually went and spoke at the place he was killed, about a month later, just to talk about it. But unfortunately, in response to the murder, there was this backlash against anybody who said anything even slightly insensitive about it.

    There was this one ex-cop in Tennessee—a local liberal gadfly type. When an email went out saying, "We're all getting together to do a vigil for Charlie Kirk," he sent back a meme of when President Trump said about a school shooting, "We have to get over it," basically saying, "These are my thoughts about this." Was it sensitive? No. Is it using an actual quote to criticize the president, something people have done since time immemorial? Certainly he has the right to do it.

    He was put in jail for 37 days. They made this complete BS argument that because this was about a school shooting and because there was a local high school with a similar name, somehow that was threatening to that school—even though people at that school said they didn't feel threatened. You have to go back to the 1920s to find an example of someone punished that harshly for speech that's clearly protected.

    Back to your question about what to do: contact FIRE. I know it sounds maybe too simple, but we're very effective at helping people, even in smaller cases. I'm trying to expand our tech work, because I do think there are plenty of valid criticisms of AI and social media, but I'm afraid that in our moment of skepticism about these technologies, people are going to push for massive government regulation—like, why not create a new Federal Communications Commission that only deals with AI? And I'm like, because it would be a disaster. That's why.

    One of the easiest things people can do—even when their own free speech rights aren't threatened—and something especially powerful for prominent people: when someone they really disagree with is getting in trouble for their speech, make a point of saying, "I support this person's right to free speech." Period. Unapologetically. That's the stuff that can really win people over.

    Jordan: That's a really great point. We know this in business—there's a term for it: psychological safety. You're so much better off if your team can disagree with each other. If that makes for a better company, wouldn't it make for a better nation?

    Greg: Absolutely. It bums me out a little bit that they use the word "safety" in that, because safety gets used as a rationale for so much censorship. But I like that Adam Grant made the point that it means the exact opposite of what it would mean in the rest of society: safe to be wrong, safe to disagree with the boss, safe to experiment, safe to do devil's advocacy. That's the stuff that makes free speech more useful. But unfortunately, on campus it too often gets used as code for "doesn't hurt anyone's tummy."

    Jordan: Yeah, it's an unfortunate choice of words, but the concept is in the right direction. Reach out to FIRE. And the other thing I'm wondering—you mentioned this on the Liv Boeree podcast as well—the U.S. is a case-law system, so these small cases that seem like no big deal actually set really important precedents for everything going forward.

    Greg: Absolutely. I remember getting a criticism of the ridiculous number of cases we cite in The Canceling of the American Mind—someone saying, "Those are just anecdotes." I'm like, anecdotes are secondhand stories without documentation. Don't call them anecdotes. These are highly documented. And you know what highly documented situations of violations of rights are called when they end up in court? They're called precedent.

    How many stories seemed like little stupid things that nobody thought would be a big deal? One of the dumbest of all time—which resulted in a bad decision—was this 18-year-old who showed up at the Olympic torch relay when it passed through Alaska with a sign that said, "Bong Hits 4 Jesus." And I remember hearing that and thinking, "Okay, that's actually pretty funny." But the school claimed it was an official high school event—which was a stretch—and the kid got punished. He wasn't even a kid; he was 18, he wasn't at school, he was on public property.

    When this went in front of the Supreme Court, it resulted in one of the most incoherent decisions ever, where they were trying to parse through what "Bong Hits 4 Jesus" meant. In the opinion, they have things like, "It could mean 'bong hits' [are good] for Jesus, or 'do bong hits' for Jesus..." And I'm like, no—there's a really simple word for this. It's a joke. It's a pretty funny joke. Don't take it that seriously.

    But there are so many other examples. For high school free speech, there's the case of students wanting to wear black armbands in protest of the Vietnam War—whether you can engage in that kind of peaceful protest. The answer was yes. That was a small-town thing that became precedent for the rest of the country. So don't always assume that your little case is really all that little.


    Closing

    Jordan: That feels so good to hear. Thank you. Anything else you want to say before we close out?

    Greg: Always. We're looking for principled people. Since we make both the right and the left mad, we have donors who love it when we fight wokeness but really don't like when we take on MAGA, and we have people who love it when we take on MAGA but really don't like when we take on the left. That means the only kind of people who support FIRE are people who really get it and are really principled. So if you are one of those people, if you know those kinds of people, we really do need your support—because we have to keep growing, because the threat to free speech is actually getting greater, unfortunately.

    Jordan: Awesome. Literally when we get off this call, I'm going to go start a monthly donation.

    Greg: Thank you so much. It means the world to me, Jordan.

    Jordan: I really love what you're doing. I appreciate getting to know you better. This was a blast—so much overlap with Stoicism, with all of it.

    Greg: Real pleasure. Stay in touch.

    Jordan: Likewise. Beautiful.


    Show Notes & References

    People Mentioned

    • Greg Lukianoff — Attorney, author, and President & CEO of FIRE (Foundation for Individual Rights and Expression). Co-author of The Coddling of the American Mind and The Canceling of the American Mind. Website: thefire.org
    • Harvey Silverglate — Civil liberties attorney and co-founder of FIRE (alongside Alan Charles Kors in 1999). Co-author of The Shadow University.
    • Jonathan Haidt — Social psychologist at NYU Stern, co-author of The Coddling of the American Mind with Lukianoff, and author of The Righteous Mind: Why Good People Are Divided by Politics and Religion.
    • Rikki Schlott — Journalist and co-author of The Canceling of the American Mind (2023) with Greg Lukianoff.
    • Liv Boeree — Former professional poker player and science communicator. Hosts the Win-Win podcast.
    • Nadine Strossen — Former president of the ACLU (served right after the Skokie case), now a Senior Fellow at FIRE. Co-author of The War on Words (2025) with Lukianoff.
    • Ira Glasser — Former executive director of the ACLU (1978–2001); subject of the documentary Mighty Ira. Serves on FIRE's advisory council.
    • Nico Perrino — Executive Vice President of FIRE; co-directed and produced the documentary Mighty Ira (2020).
    • Larry Summers — Economist, former President of Harvard University (2001–2006), former U.S. Secretary of the Treasury. Known for his 2005 remarks on gender disparities in STEM.
    • Susan David — Psychologist at Harvard Medical School, author of Emotional Agility (2016). Credited with the phrase "feelings are data, not directions."
    • Bryan Caplan — Economist at George Mason University, author of The Case Against Education: Why the Education System Is a Waste of Time and Money (2018).
    • Adam Grant — Organizational psychologist at Wharton, author of Think Again and others. Referenced for his discussion of psychological safety.
    • Pamela Paresky — Psychologist and writer; credited here with the concept of "moral pollution" in public discourse.
    • Erika Christakis — Early childhood educator and author of The Importance of Being Little: What Young Children Really Need from Grownups (2016).
    • Alyssa Rosenberg — Journalist, formerly at The Washington Post, referenced in discussion of popular misanthropy.
    • Karl Popper — Philosopher of science, known for the concept of falsifiability and Conjectures and Refutations: The Growth of Scientific Knowledge (1963).
    • Montesquieu — Enlightenment philosopher, author of The Spirit of the Laws (1748), influential in the theory of separation of powers.
    • James Madison — Fourth President of the United States, principal author of the U.S. Constitution and the Bill of Rights.
    • Philip Zimbardo — Psychologist who ran the Stanford Prison Experiment (1971), which has faced extensive criticism for methodological flaws and accusations of coaching participants.
    • Stanley Milgram — Social psychologist known for his obedience experiments (1961–1963), which have been replicated in subsequent studies.
    • Robert Kegan — Developmental psychologist at Harvard, known for his model of adult psychological development outlined in The Evolving Self (1982) and In Over Our Heads (1994). Referenced here for his concept of "self-authoring" (fourth order of consciousness) versus "socialized" (third order).
    • Seneca — Roman Stoic philosopher (c. 4 BC–AD 65), known for Letters to Lucilius (also called Moral Letters to Lucilius or Letters from a Stoic).
    • Marcus Aurelius — Roman emperor (AD 161–180) and Stoic philosopher, author of Meditations.
    • Wernher von Braun — German-American rocket engineer; instrumental in NASA's Apollo program, and a former member of the Nazi Party.
    • Thomas Malthus — English economist and clergyman, known for An Essay on the Principle of Population (1798) and the "Malthusian trap" theory of population growth.
    • Jean-Jacques Rousseau — Genevan philosopher, author of Émile, or On Education (1762) and The Social Contract (1762).
    • Karl Marx — German philosopher and economist, author of Das Kapital and co-author of The Communist Manifesto.

    Books Referenced

    • The Coddling of the American Mind: How Good Intentions and Bad Ideas Are Setting Up a Generation for Failure — Greg Lukianoff & Jonathan Haidt (2018, Penguin Press)
    • The Canceling of the American Mind: Cancel Culture Undermines Trust and Threatens Us All — Greg Lukianoff & Rikki Schlott (2023, Simon & Schuster)
    • The Case Against Education: Why the Education System Is a Waste of Time and Money — Bryan Caplan (2018, Princeton University Press)
    • The Importance of Being Little: What Young Children Really Need from Grownups — Erika Christakis (2016, Penguin)
    • Emotional Agility: Get Unstuck, Embrace Change, and Thrive in Work and Life — Susan David (2016, Avery)
    • The Republic — Plato (c. 375 BC)
    • Conjectures and Refutations: The Growth of Scientific Knowledge — Karl Popper (1963)
    • Émile, or On Education — Jean-Jacques Rousseau (1762)
    • Letters to Lucilius (Moral Letters to Lucilius) — Seneca (c. AD 65)
    • Meditations — Marcus Aurelius (c. AD 170–180)

    Films & Media

    • Mighty Ira: A Civil Liberties Story (2020) — Documentary profiling Ira Glasser's career at the ACLU, directed by Nico Perrino, Chris Maltby, and Aaron Reese. Available on Amazon Prime, Apple TV, and other streaming platforms.
    • Can We Take a Joke? (2015) — Documentary about comedy and free speech, executive produced by Greg Lukianoff.
    • Greg Lukianoff's TED Talk (2025) — On "mob censorship" and why free speech is the best check on power.
    • The Eternally Radical Idea — Greg Lukianoff's Substack newsletter.

    Key Concepts & Topics

    • FIRE (Foundation for Individual Rights and Expression) — Founded in 1999 by Alan Charles Kors and Harvey Silverglate. Defends free speech and individual rights, with a focus on campuses and expanding into broader civil liberties work. Budget of ~$35 million; ~130–140 employees. Website: thefire.org
    • The Three Great Untruths — From The Coddling of the American Mind: (1) "What doesn't kill you makes you weaker," (2) "Always trust your feelings," (3) "Life is a battle between good people and evil people."
    • The Fourth Great Untruth — From The Canceling of the American Mind: "All bad people only have bad opinions" — the fallacy of discounting someone's arguments based on their moral character rather than engaging with the substance.
    • High Decoupling — A Rationalist community concept: the ability to evaluate ideas independently of the moral character of the person expressing them.
    • Conjectures and Refutations (Popper) — The epistemological model that knowledge grows by proposing bold conjectures and then attempting to refute them; we approach truth not directly, but by eliminating what is false.
    • Stanford Prison Experiment — Philip Zimbardo's 1971 experiment, widely criticized for methodological problems. Recent investigations suggest participants were coached to behave abusively, undermining the study's claims about human nature.
    • Milgram Obedience Experiments — Stanley Milgram's 1961–1963 experiments demonstrating people's willingness to obey authority figures even when instructed to administer harm. Unlike the Stanford Prison Experiment, subsequent replications have largely supported the original findings.
    • Skokie Case (1977–78) — The ACLU's defense of a neo-Nazi group's right to march in Skokie, Illinois, home to many Holocaust survivors. The case nearly bankrupted the ACLU but became a landmark example of principled free-speech defense. Frank Collin led the neo-Nazi group; David Goldberger was lead ACLU attorney.
    • Morse v. Frederick ("Bong Hits 4 Jesus") — 2007 Supreme Court case. Joseph Frederick, an 18-year-old student, displayed a banner reading "BONG HiTS 4 JESUS" at a school-supervised event. The Court ruled 5–4 that schools may restrict student speech that can be interpreted as promoting illegal drug use—a decision widely criticized as incoherent.
    • Tinker v. Des Moines (1969) — Landmark Supreme Court case establishing that students do not "shed their constitutional rights to freedom of speech or expression at the schoolhouse gate." Arose from students wearing black armbands in protest of the Vietnam War.
    • Kegan's Developmental Model — Robert Kegan's framework of adult psychological development. "Third order" consciousness involves being shaped by external expectations and social roles (the "socialized mind"); "fourth order" involves self-authorship—constructing one's own identity, values, and worldview.
    • Psychological Safety — A concept popularized by Amy Edmondson and discussed by Adam Grant, referring to an environment where people feel safe to take interpersonal risks, disagree, and make mistakes without fear of punishment. In the workplace, it means the freedom to be wrong and to challenge authority—distinct from the campus use of "safety" to mean protection from uncomfortable ideas.
    • The Cosmos Institute — An organization working on truth-seeking and structured friction against received knowledge, referenced by Lukianoff.
    • UpTrust — Jordan Myska Allen's trust-based social media platform, designed to algorithmically prioritize credibility and nuance over engagement bait.
    • Relatefulness — A community and practice co-founded by Jordan Myska Allen, focused on relational depth and contemplative approaches to human connection.

    Some questions inspired by this post on UpTrust.


    Paulleverich•...
    Reading through this, a few things really stood out to me.   The part about only really being able to care about what ten people think of you is honestly one of the most practical pieces of wisdom in the whole conversation....
    social psychology
    philosophy of science
    free speech
    media and internet algorithms
    public policy and regulation
  • annabeth avatar

    Looking for bridges in views about the second Trump administration. I'm currently aware of four views:

    • This is the worst thing ever, I'm terrified
    • This is the best thing ever, I'm thrilled
    • I don't pay attention to politics, so far my life feels exactly the same
    • Some of the changes seem pretty cool so far, but we'll see

    Where are the middle grounds? I want to know how to build bridges in my personal connections when politics comes up these days.

     

     

    Ambiguously•...
    No honey. You mistake "tokens" to represent en masse. That's not how the real world works. The mass majority of minority groups are unwilling to play Uncle Tom to make you feel better about yourself....
    social psychology
    cultural studies
    racial dynamics
  • Hannah Aline Taylor avatar

    Isn't It Ironic? . Don't you think? 

    A little toooo ironic. 

    I really do think. 

    This site where we are upping trust lets users post under a pseudonym. 

    Every time I see a post or comment from a pseudonym, screen name, handle, what have you, after first wondering if it's another godforsaken AI bot stealing my eyeballs away from human creations, I remember a line from the Tao Te Ching: 

    To give no trust

    is to get no trust. 

    v. 17, Lao Tzu x Ursula K. Le Guin 

    Wintermoon56•...

    I don't agree. you really don't know and I understand. some of us have LIFE altering reasons. See ppl judge to quickly. But in today's effed up world....we choose the bear.

    social psychology
    society and culture
  • blake avatar

    The Decline and Fall of the Roman Empire, probably via use of the word "optics" ;) . I've been reading the Decline and Fall of the Roman Empire (abridged*, of course, at least to start with!). New to the topic, and I’ve never identified as a history buff, but I’m really loving it. I wanted to write a short post about it, but couldn’t quickly figure out how to say what I wanted briefly, so here’s a long one!

    It feels like a bird's-eye view of modern politics, in many ways, but especially regarding "The American Experiment." I'm sure this comparison isn't new--it's probably a huge part of what makes Decline and Fall popular today, despite being published in 1776. Since there's a whole trope about Rome buffs, I imagine many of you have hashed over all this a ton previously.

    The early part of Decline and Fall starts with how amazing Rome was. Of course, it built on other civilizations and governments that came before it, but I think we these days have a hard time imagining just how surprisingly modern it would seem to us, if we were transplanted to the Roman Empire in its heyday. Of course we have tons of hard tech they didn't. But on the social level, I think a lot of it would feel spookily familiar. (I’m sure the author and I are both missing or leaving out huge ways it’s different. But I think there’s still a lot we can learn from it.)

    Widespread assumption of and dedication to: rule of law, trial by peers, market-based economy. And somehow the start of the Roman Empire manifested a deep dedication among citizens and leaders to a Republic as the form of government. No nepotism, no monarchy, no might makes right. Government of the people, by the people, for the people, at least in spirit--my sense is people and government and military were all aligned in their dedication to that spirit. 

    And peace! Peace, for centuries, throughout a huge swath of the known world, where that hadn’t happened before. There was a kind of national religion they inherited from the Greeks, but they seem to have been even more dedicated to religious tolerance than to their religion (prior to Constantine and the Christians taking over). Sure, there was kind of constant fighting on the edges of the empire, including always against the pesky Gauls and German barbarians, who really hated the idea of being part of the big empire. But mostly, and especially compared to times before in much of Europe, you could live safe in your home with your family, for generations even, protected by law-abiding and law-enforcing local authorities, backed up by the Roman army when needed, truly answerable to the people through the representation of the Senate, such as it was, and it was pretty great as far as I can tell. 

    Now, the bird's-eye view of the modern USA comes in when, generation after generation, leader after leader, eventually monarch after monarch, the common-knowledge shared dedication to being a Republic and to all the ideas above, faded over time. First, one or two leaders came along who had enough sway over the army and enough popularity with the people that they were able to, against the grain of all Republic dedication, declare themselves effective leaders of the empire. First humbly, as first-among-many. Then with time, openly and pompously. Then with more time, it became obvious to everyone that the Republic was only a Republic in name, that it was just obviously "the way things worked" that the army effectively got to decide who became emperor, and that as soon as the army switched loyalties, you'd better be ready for a change, including probably a bunch of people getting killed for being on the wrong side. 

    The thing about Decline and Fall, wrt this kind of degradation, is you get to read real human stories of this happening, again, and again, and again, and again. The same patterns, the different humans with unique circumstances playing them out. 

    Why did the dedication to the original ideals degrade with time? I think the same natural processes, and lack of opposing processes, have led the US and myriad other democracies down similar paths over time. People and groups learn to subvert the system to get more of what they want in the short term, sacrificing the common-knowledge dedications and ideals that support the good things they have in the world. They pay less attention to the whole than is needed to maintain it. 

    I'll name what I see today as one instance of roughly this kind of degradation, and I hope it's a little spicy. I have been part of many, many conversations in organizations where, when discussing some strategic question for the organization, the word "optics" comes up. For the uninitiated, the word "optics" in this context means: people could see what we're doing and have interpretations of it. We don't want those interpretations to have bad consequences for us. So let's be sure to include in our strategizing some component of consideration for trying to get people's impressions (the public, journalists, stakeholders, or etc) to be at least neutral. I can understand that. But I want to live in a world where we're creating the whole we want, not mostly attempting to persuade or convince or if nothing else not be noticed by parts of society that IMO we ought to relate to as peers. If we all practice distrusting our peers' sense-making processes in this way of strategizing about "optics", we'll all end up with a society with worse and less sense-making. So what do I want instead? I want us to take actions with integrity. Yes to being aware of our reputation (individually, organizationally, etc) and acting with integrity.

    (*The abridged version I landed on, after some back and forth about versions with Claude, is the Womersley edition. I love it. You get 100-200 pages of the above, which was just right for this first-timer.)

    #DeepTakes

    jordanSA•...

    You make a really good point about some downsides of throwing optics to the wind. 

    I think there's a real big difference between pre-social self-expression and post-social authenticity. 

    social psychology
    self-expression

    #DeepTakes

    josefine•...
    Oh yeah, that definitely resonates more. It actually feels like another #deeptake :)  I think I'm also getting from you that there's some value for you in honesty around reputation, such that one actually can rely on it within the system....
    ethics
    social psychology
    communication studies
    Comments
    0
  • blake avatar

    The Decline and Fall of the Roman Empire, probably via use of the word "optics" ;) . I've been reading the Decline and Fall of the Roman Empire (abridged*, of course, at least to start with!). New to the topic, and I’ve never identified as a history buff, but I’m really loving it. I wanted to write a short post about it, but couldn’t quickly figure out how to say what I wanted briefly, so here’s a long one!

    It feels like a bird's-eye view of modern politics, in many ways, but especially regarding "The American Experiment." I'm sure this comparison isn't new--it's probably a huge part of what makes Decline and Fall popular today, despite being published in 1776. Since there's a whole trope about Rome buffs, I imagine many of you have hashed over all this a ton previously.

    The early part of Decline and Fall starts with how amazing Rome was. Of course, it built on other civilizations and governments that came before it, but I think we today have a hard time imagining just how surprisingly modern it would seem to us if we were transplanted to the Roman Empire in its heyday. Of course we have tons of hard tech they didn't. But on the social level, I think a lot of it would feel spookily familiar. (I’m sure the author and I are both missing or leaving out huge ways it’s different. But I think there’s still a lot we can learn from it.)

    Widespread assumption of and dedication to: rule of law, trial by peers, market-based economy. And somehow the start of the Roman Empire manifested a deep dedication among citizens and leaders to a Republic as the form of government. No nepotism, no monarchy, no might makes right. Government of the people, by the people, for the people, at least in spirit--my sense is people and government and military were all aligned in their dedication to that spirit. 

    And peace! Peace, for centuries, throughout a huge swath of the known world, where that hadn’t happened before. There was a kind of national religion they inherited from the Greeks, but they seem to have been even more dedicated to religious tolerance than to their religion (prior to Constantine and the Christians taking over). Sure, there was near-constant fighting on the edges of the empire, including always against the pesky Gauls and German barbarians, who really hated the idea of being part of the big empire. But mostly, and especially compared to earlier times in much of Europe, you could live safe in your home with your family, for generations even, protected by law-abiding and law-enforcing local authorities, backed up by the Roman army when needed, truly answerable to the people through the representation of the Senate, such as it was. It was pretty great, as far as I can tell. 

    Now, the bird's-eye view of the modern USA comes in when, generation after generation, leader after leader, eventually monarch after monarch, the common-knowledge shared dedication to being a Republic and to all the ideas above, faded over time. First, one or two leaders came along who had enough sway over the army and enough popularity with the people that they were able to, against the grain of all Republic dedication, declare themselves effective leaders of the empire. First humbly, as first-among-many. Then with time, openly and pompously. Then with more time, it became obvious to everyone that the Republic was only a Republic in name, that it was just obviously "the way things worked" that the army effectively got to decide who became emperor, and that as soon as the army switched loyalties, you'd better be ready for a change, including probably a bunch of people getting killed for being on the wrong side. 

    The thing about Decline and Fall, wrt this kind of degradation, is you get to read real human stories of this happening, again, and again, and again, and again. The same patterns, played out by different humans in unique circumstances. 

    Why did the dedication to the original ideals degrade with time? I think the same natural processes, and lack of opposing processes, have led the US and myriad other democracies down similar paths over time. People and groups learn to subvert the system to get more of what they want in the short term, sacrificing the common-knowledge dedications and ideals that support the good things they have in the world. They pay less attention to the whole than is needed to maintain it. 

    I'll name what I see today as one instance of roughly this kind of degradation, and I hope it's a little spicy. I have been part of many, many conversations in organizations where, when discussing some strategic question for the organization, the word "optics" comes up. For the uninitiated, the word "optics" in this context means: people could see what we're doing and have interpretations of it. We don't want those interpretations to have bad consequences for us. So let's be sure to include in our strategizing some consideration for trying to get people's impressions (the public, journalists, stakeholders, etc.) to be at least neutral. I can understand that. But I want to live in a world where we're creating the whole we want, not mostly attempting to persuade, convince, or simply avoid the notice of parts of society that IMO we ought to relate to as peers. If we all practice distrusting our peers' sense-making processes in this way of strategizing about "optics", we'll all end up with a society with worse and less sense-making. So what do I want instead? I want us to take actions with integrity. Yes to being aware of our reputation (individually, organizationally, etc.) and acting with integrity.

    (*The abridged version I landed on, after some back and forth about versions with Claude, is the Womersley version. I love it. You get 100-200 pages of the above, which was just right for this first-timer.)

    #DeepTakes

    blakeSA•...
    Love this invitation, and I'll bite.  Hmm, first place my mind goes: optics feels like an instance inside a larger category of taking the social fabric as something to be manipulated, to try to cause people to think particular things or feel particular ways about you or some...
    ethics
    political science
    social psychology
    communication
    Comments
    0
  • jordan avatar

    The Relateful Company should embrace more job titles. We’re under-appreciating orange.

    We’ve included the green critiques, like the classic:

    What gets measured gets managed — even when it’s pointless to measure and manage it, and even if it harms the purpose of the organisation to do so - V. F. Ridgway, 1956

    But we need to embrace more healthy competition, striving for excellence, even rankings.

    One way we can do this is to make more liberal use of titles, and brag on people. @Valerie Daniel is the MANAGING DIRECTOR, and we should have her listed as such in emails and things.

    What else is healthy orange and how can we transclude it?
    What do we already do that is already healthy orange?

    jordanSA•...
    I had some long discussions with people like Ellyn and Thea about their disagreement; I think they were still missing the point because the large body of work about coming up with scores, as I understood (or misunderstood it) was more about assigning something that could be...
    social psychology
    interpersonal communication
    psychometrics
    collective awareness
    objectivity in scoring
    system design
    Comments
    0
  • X

    New structures for family-friends? Chatting with a friend recently and came up with this novel idea.

    Historically, many people would end up married, having kids, and having responsibilities to their family and local community and groups.

    These days, we have less family and civic integrity, and fewer people are having kids. More people are creating their family of choice with friends.

    I think there’s a general love and aliveness everyone wants to express and be in connection with.

    But without the usual routes of kids/religion/local community, it doesn’t get routed well anymore.

    We need more structures/ideas/understanding to support new kinds of families and community structures.

    Examples:
    How about an app that makes it easier to crowdsource babysitting among trusted local friends?

    Most housing is built around one nuclear family, with 1-4 bedrooms. But what about community homes with larger kitchens and living rooms, and more but smaller bedrooms?

    I’m gesturing at the general idea that modern, industrial civilization is built around nuclear families, but we’re generating many more family forms now, and the ideas/social practices/physical infrastructure are still lagging behind.

    Xuramitra PPARK•...
    Complexity def gets higher, which is why so many of us fall back to default patterns/norms or keep things at super small sizes. Some friends of mine pointed me towards the work at https://mikikashtan.org/, which seems interesting....
    economics
    social psychology
    organizational behavior
    community building
    experimental economics
    Comments
    0
  • B

    Why you should post more: Everything is a mirror of everything.

    We’re all censoring most of our awareness.

    Uptrust is currently a curated community where we can actually practice thinking.

    The more I post, the more direct I’m being with everyone in my life. (I have an embedded belief that if I do anything anywhere, then I should do that anything in more everywheres… but I still curate).

    Post about why you’re not posting.

    Post about questions you’re asking yourself.

    Post about your anxiety.

    Post to express art.

    This shit won’t last, this fun safe newborn ward. Use this time now to try something. Create a fake name or another account so you can try it from anonymity.

    Huge opportunity to bust out of our norms.

    jordanSA•...
    I think keeping this tension and question alive in our awareness, and then running experiments that come from staying present with the tension (desire for the fun/safe/curated, and idea that it won’t last), will help it last longer, and potentially give us insight into...
    social psychology
    group dynamics
    internet culture
    community management
    human computer interaction
    Comments
    0
  • jordan avatar

    Racism through a developmental lens. unfinished draft…
    note: I’m totally uninformed here…

    • Red: Does this benefit me?

    • Amber: My race is simply better (or worse) than yours. We perpetuate it because that’s good.

    • Orange: Racism is a thing we transcend by being worldcentric and meritocratic; we perpetuate it by constantly looking at everything through the racism lens.

    • Green: Systemic racism is everywhere (and at the root of many of our social problems); we transcend it by balancing the scales with education and programs to help the victims and stop the perpetrators; we perpetuate it by taking advantage of our privileges, ignoring it, and doing nothing.

    • Teal: Systemic racism is real, but it’s mostly an unconscious self-organizing system that’s perpetuated because of the incentives that keep things how they are. We transcend by owning our projection, and by setting up systems that reward non-racism for each level of development in the currency that level values.

    • Turquoise: We never transcend racism, it’s a construct we enact through conscious embracing and boundarying/channeling or we enact through ignorance.

    All these are frames that enact world-experiences that overlap, and they’re all us; these frames keep us from being in awareness and seeing awareness as the stuff the frames are made of, which is the way out of the self-referential, self-refuting trap of this frame into unity of experience…

    note: This doesn’t mean everyone who’s using the surface language of systemic racism or whatever is actually at that level—for example, there’s a red-green alliance that uses Green language because it benefits them directly; there’s an amber-green alliance that uses green language to make their in-group good/better and make others wrong/bad.

    jordanSA•...

    i think it’s roughly "my group is inherently better than yours," or something like this, but I think you’re pointing to an important thing: it’s not a super precise definition

    ethics
    sociology
    social psychology
    group dynamics
    Comments
    0
  • annabeth avatar

    Like is different from trust. I think Jordan said at an uptrust session that he misses the like button. I’m having the same feeling lately; there are posts I like that I wouldn’t necessarily say I trust. Or I want to give some sort of "that was cool," but I don’t want that statement in my trust algorithm.

    But maybe that’s all for the best? Surely some not-insignificant portion of my trust isn’t in my conscious awareness, maybe feeling a sense of yes to something is functionally the same as trust.

    dara_like_saraSA•...
    I am considering that "uptrust" is similar to reddit. "Uptrust" means "yeah, I want more of this!" and "downtrust" means "nope, less of that." I thought I wanted a like button initially, but I notice that I don’t have that same desire when I’m on reddit....
    social psychology
    online communities
    social media
    user interface design
    Comments
    0
  • annabeth avatar

    Can we handle the truth? If UpTrust works the way it’s intended, it will make truth more accessible. But what percentage of the population currently has the capacity to face truth?

    Perhaps alongside truth, the tech will make the skills for being with the truth more accessible too. And avoidance will come in for the assist when needed?

    annabeth•...

    Yeah, agree with the A-H concept.

    As for openness to consensus truth, UpTrust seems like the kind of place that specifically draws people who already have that openness.

    social psychology
    organizational behavior
    corporate culture
    Comments
    0
  • nat avatar

    The pressure to be thoughtful. It’s an interesting thing - this feeling that I need to post something thoughtful here. I’m feeling tension emerge in my abdomen. There’s a belief that there’s a right and a wrong way to engage in this community with no clarity on what is right or wrong. Noticing that and letting that be.

    jordanSA•...

    Reading this from both of you I feel more connected to you both. Grateful, resonant, loving. Thank you!

    emotional intelligence
    social psychology
    interpersonal communication
    gratitude practices
    Comments
    0